00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 142 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3643 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.029 Fetching changes from the remote Git repository 00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.044 Using shallow fetch with depth 1 00:00:00.044 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.044 > git --version # timeout=10 00:00:00.058 > git --version # 'git version 2.39.2' 00:00:00.058 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.073 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.073 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.673 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.684 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.695 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.695 > git config core.sparsecheckout # timeout=10 00:00:02.704 > git read-tree -mu HEAD # timeout=10 00:00:02.718 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
00:00:02.734 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.734 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.819 [Pipeline] Start of Pipeline 00:00:02.831 [Pipeline] library 00:00:02.832 Loading library shm_lib@master 00:00:02.832 Library shm_lib@master is cached. Copying from home. 00:00:02.844 [Pipeline] node 00:00:02.854 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.857 [Pipeline] { 00:00:02.864 [Pipeline] catchError 00:00:02.865 [Pipeline] { 00:00:02.873 [Pipeline] wrap 00:00:02.880 [Pipeline] { 00:00:02.887 [Pipeline] stage 00:00:02.889 [Pipeline] { (Prologue) 00:00:02.901 [Pipeline] echo 00:00:02.902 Node: VM-host-WFP7 00:00:02.906 [Pipeline] cleanWs 00:00:02.917 [WS-CLEANUP] Deleting project workspace... 00:00:02.917 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.924 [WS-CLEANUP] done 00:00:03.098 [Pipeline] setCustomBuildProperty 00:00:03.210 [Pipeline] httpRequest 00:00:03.530 [Pipeline] echo 00:00:03.531 Sorcerer 10.211.164.20 is alive 00:00:03.540 [Pipeline] retry 00:00:03.542 [Pipeline] { 00:00:03.556 [Pipeline] httpRequest 00:00:03.561 HttpMethod: GET 00:00:03.562 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.562 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.563 Response Code: HTTP/1.1 200 OK 00:00:03.564 Success: Status code 200 is in the accepted range: 200,404 00:00:03.564 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.710 [Pipeline] } 00:00:03.728 [Pipeline] // retry 00:00:03.733 [Pipeline] sh 00:00:04.013 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.030 [Pipeline] httpRequest 00:00:04.595 [Pipeline] echo 00:00:04.597 Sorcerer 10.211.164.20 is alive 00:00:04.604 [Pipeline] retry 00:00:04.606 
[Pipeline] { 00:00:04.618 [Pipeline] httpRequest 00:00:04.622 HttpMethod: GET 00:00:04.623 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:04.623 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:04.625 Response Code: HTTP/1.1 200 OK 00:00:04.625 Success: Status code 200 is in the accepted range: 200,404 00:00:04.625 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:23.380 [Pipeline] } 00:00:23.398 [Pipeline] // retry 00:00:23.405 [Pipeline] sh 00:00:23.691 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:26.252 [Pipeline] sh 00:00:26.538 + git -C spdk log --oneline -n5 00:00:26.538 b18e1bd62 version: v24.09.1-pre 00:00:26.538 19524ad45 version: v24.09 00:00:26.538 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:26.538 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:26.538 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:26.559 [Pipeline] withCredentials 00:00:26.571 > git --version # timeout=10 00:00:26.586 > git --version # 'git version 2.39.2' 00:00:26.606 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:26.608 [Pipeline] { 00:00:26.617 [Pipeline] retry 00:00:26.619 [Pipeline] { 00:00:26.635 [Pipeline] sh 00:00:26.920 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:27.195 [Pipeline] } 00:00:27.214 [Pipeline] // retry 00:00:27.219 [Pipeline] } 00:00:27.235 [Pipeline] // withCredentials 00:00:27.246 [Pipeline] httpRequest 00:00:27.640 [Pipeline] echo 00:00:27.642 Sorcerer 10.211.164.20 is alive 00:00:27.653 [Pipeline] retry 00:00:27.655 [Pipeline] { 00:00:27.670 [Pipeline] httpRequest 00:00:27.675 HttpMethod: GET 00:00:27.676 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.676 Sending request to url: 
http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.681 Response Code: HTTP/1.1 200 OK 00:00:27.682 Success: Status code 200 is in the accepted range: 200,404 00:00:27.683 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.261 [Pipeline] } 00:01:25.277 [Pipeline] // retry 00:01:25.284 [Pipeline] sh 00:01:25.568 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:26.965 [Pipeline] sh 00:01:27.252 + git -C dpdk log --oneline -n5 00:01:27.252 eeb0605f11 version: 23.11.0 00:01:27.252 238778122a doc: update release notes for 23.11 00:01:27.252 46aa6b3cfc doc: fix description of RSS features 00:01:27.252 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:27.252 7e421ae345 devtools: support skipping forbid rule check 00:01:27.274 [Pipeline] writeFile 00:01:27.292 [Pipeline] sh 00:01:27.583 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:27.598 [Pipeline] sh 00:01:27.930 + cat autorun-spdk.conf 00:01:27.930 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.930 SPDK_RUN_ASAN=1 00:01:27.930 SPDK_RUN_UBSAN=1 00:01:27.930 SPDK_TEST_RAID=1 00:01:27.930 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:27.930 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:27.930 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.939 RUN_NIGHTLY=1 00:01:27.941 [Pipeline] } 00:01:27.957 [Pipeline] // stage 00:01:27.972 [Pipeline] stage 00:01:27.974 [Pipeline] { (Run VM) 00:01:27.987 [Pipeline] sh 00:01:28.271 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:28.272 + echo 'Start stage prepare_nvme.sh' 00:01:28.272 Start stage prepare_nvme.sh 00:01:28.272 + [[ -n 1 ]] 00:01:28.272 + disk_prefix=ex1 00:01:28.272 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:28.272 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:28.272 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 
00:01:28.272 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.272 ++ SPDK_RUN_ASAN=1 00:01:28.272 ++ SPDK_RUN_UBSAN=1 00:01:28.272 ++ SPDK_TEST_RAID=1 00:01:28.272 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:28.272 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:28.272 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.272 ++ RUN_NIGHTLY=1 00:01:28.272 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:28.272 + nvme_files=() 00:01:28.272 + declare -A nvme_files 00:01:28.272 + backend_dir=/var/lib/libvirt/images/backends 00:01:28.272 + nvme_files['nvme.img']=5G 00:01:28.272 + nvme_files['nvme-cmb.img']=5G 00:01:28.272 + nvme_files['nvme-multi0.img']=4G 00:01:28.272 + nvme_files['nvme-multi1.img']=4G 00:01:28.272 + nvme_files['nvme-multi2.img']=4G 00:01:28.272 + nvme_files['nvme-openstack.img']=8G 00:01:28.272 + nvme_files['nvme-zns.img']=5G 00:01:28.272 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:28.272 + (( SPDK_TEST_FTL == 1 )) 00:01:28.272 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:28.272 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:28.272 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:28.272 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:28.272 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:28.272 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:28.272 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:28.272 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.272 + for nvme in "${!nvme_files[@]}" 00:01:28.272 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:28.533 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.533 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:28.533 + echo 'End stage prepare_nvme.sh' 00:01:28.533 End stage prepare_nvme.sh 00:01:28.546 [Pipeline] sh 00:01:28.834 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:28.834 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:28.834 00:01:28.834 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:28.834 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:28.834 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:28.834 HELP=0 00:01:28.834 DRY_RUN=0 00:01:28.834 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:28.834 NVME_DISKS_TYPE=nvme,nvme, 00:01:28.834 NVME_AUTO_CREATE=0 00:01:28.834 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:28.834 NVME_CMB=,, 00:01:28.834 NVME_PMR=,, 00:01:28.834 NVME_ZNS=,, 00:01:28.834 NVME_MS=,, 00:01:28.834 NVME_FDP=,, 00:01:28.834 SPDK_VAGRANT_DISTRO=fedora39 00:01:28.834 SPDK_VAGRANT_VMCPU=10 00:01:28.834 SPDK_VAGRANT_VMRAM=12288 00:01:28.834 SPDK_VAGRANT_PROVIDER=libvirt 00:01:28.834 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:28.834 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:28.834 SPDK_OPENSTACK_NETWORK=0 00:01:28.834 VAGRANT_PACKAGE_BOX=0 00:01:28.834 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:28.834 
FORCE_DISTRO=true 00:01:28.834 VAGRANT_BOX_VERSION= 00:01:28.834 EXTRA_VAGRANTFILES= 00:01:28.834 NIC_MODEL=virtio 00:01:28.834 00:01:28.834 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:28.834 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:30.743 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.003 ==> default: Creating image (snapshot of base box volume). 00:01:31.263 ==> default: Creating domain with the following settings... 00:01:31.263 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731970610_c60b04e9fb68adfa614e 00:01:31.263 ==> default: -- Domain type: kvm 00:01:31.263 ==> default: -- Cpus: 10 00:01:31.263 ==> default: -- Feature: acpi 00:01:31.263 ==> default: -- Feature: apic 00:01:31.263 ==> default: -- Feature: pae 00:01:31.263 ==> default: -- Memory: 12288M 00:01:31.263 ==> default: -- Memory Backing: hugepages: 00:01:31.263 ==> default: -- Management MAC: 00:01:31.263 ==> default: -- Loader: 00:01:31.263 ==> default: -- Nvram: 00:01:31.263 ==> default: -- Base box: spdk/fedora39 00:01:31.263 ==> default: -- Storage pool: default 00:01:31.263 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731970610_c60b04e9fb68adfa614e.img (20G) 00:01:31.263 ==> default: -- Volume Cache: default 00:01:31.263 ==> default: -- Kernel: 00:01:31.263 ==> default: -- Initrd: 00:01:31.263 ==> default: -- Graphics Type: vnc 00:01:31.263 ==> default: -- Graphics Port: -1 00:01:31.263 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.263 ==> default: -- Graphics Password: Not defined 00:01:31.263 ==> default: -- Video Type: cirrus 00:01:31.263 ==> default: -- Video VRAM: 9216 00:01:31.263 ==> default: -- Sound Type: 00:01:31.263 ==> default: -- Keymap: en-us 00:01:31.263 ==> default: -- TPM Path: 00:01:31.263 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.263 ==> default: -- Command line args: 00:01:31.263 
==> default: -> value=-device, 00:01:31.264 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:31.264 ==> default: -> value=-drive, 00:01:31.264 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:31.264 ==> default: -> value=-device, 00:01:31.264 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.264 ==> default: -> value=-device, 00:01:31.264 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:31.264 ==> default: -> value=-drive, 00:01:31.264 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:31.264 ==> default: -> value=-device, 00:01:31.264 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.264 ==> default: -> value=-drive, 00:01:31.264 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:31.264 ==> default: -> value=-device, 00:01:31.264 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.264 ==> default: -> value=-drive, 00:01:31.264 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:31.264 ==> default: -> value=-device, 00:01:31.264 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.264 ==> default: Creating shared folders metadata... 00:01:31.264 ==> default: Starting domain. 00:01:33.173 ==> default: Waiting for domain to get an IP address... 00:01:48.069 ==> default: Waiting for SSH to become available... 00:01:49.453 ==> default: Configuring and enabling network interfaces... 
00:01:56.033 default: SSH address: 192.168.121.232:22 00:01:56.033 default: SSH username: vagrant 00:01:56.033 default: SSH auth method: private key 00:01:58.576 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:06.710 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:13.298 ==> default: Mounting SSHFS shared folder... 00:02:14.683 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:14.683 ==> default: Checking Mount.. 00:02:16.594 ==> default: Folder Successfully Mounted! 00:02:16.594 ==> default: Running provisioner: file... 00:02:17.535 default: ~/.gitconfig => .gitconfig 00:02:17.795 00:02:17.795 SUCCESS! 00:02:17.795 00:02:17.795 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:17.795 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:17.795 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:17.795 00:02:17.805 [Pipeline] } 00:02:17.821 [Pipeline] // stage 00:02:17.831 [Pipeline] dir 00:02:17.832 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:17.834 [Pipeline] { 00:02:17.847 [Pipeline] catchError 00:02:17.849 [Pipeline] { 00:02:17.862 [Pipeline] sh 00:02:18.147 + vagrant ssh-config --host vagrant 00:02:18.147 + sed -ne /^Host/,$p 00:02:18.147 + tee ssh_conf 00:02:20.687 Host vagrant 00:02:20.687 HostName 192.168.121.232 00:02:20.687 User vagrant 00:02:20.687 Port 22 00:02:20.687 UserKnownHostsFile /dev/null 00:02:20.687 StrictHostKeyChecking no 00:02:20.687 PasswordAuthentication no 00:02:20.687 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:20.687 IdentitiesOnly yes 00:02:20.687 LogLevel FATAL 00:02:20.687 ForwardAgent yes 00:02:20.687 ForwardX11 yes 00:02:20.687 00:02:20.702 [Pipeline] withEnv 00:02:20.705 [Pipeline] { 00:02:20.719 [Pipeline] sh 00:02:21.004 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:21.004 source /etc/os-release 00:02:21.004 [[ -e /image.version ]] && img=$(< /image.version) 00:02:21.004 # Minimal, systemd-like check. 00:02:21.004 if [[ -e /.dockerenv ]]; then 00:02:21.004 # Clear garbage from the node's name: 00:02:21.004 # agt-er_autotest_547-896 -> autotest_547-896 00:02:21.004 # $HOSTNAME is the actual container id 00:02:21.004 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:21.004 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:21.004 # We can assume this is a mount from a host where container is running, 00:02:21.004 # so fetch its hostname to easily identify the target swarm worker. 
00:02:21.004 container="$(< /etc/hostname) ($agent)" 00:02:21.004 else 00:02:21.004 # Fallback 00:02:21.004 container=$agent 00:02:21.004 fi 00:02:21.004 fi 00:02:21.004 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:21.004 00:02:21.278 [Pipeline] } 00:02:21.294 [Pipeline] // withEnv 00:02:21.303 [Pipeline] setCustomBuildProperty 00:02:21.319 [Pipeline] stage 00:02:21.321 [Pipeline] { (Tests) 00:02:21.338 [Pipeline] sh 00:02:21.631 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:21.906 [Pipeline] sh 00:02:22.190 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:22.466 [Pipeline] timeout 00:02:22.467 Timeout set to expire in 1 hr 30 min 00:02:22.469 [Pipeline] { 00:02:22.484 [Pipeline] sh 00:02:22.768 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:23.338 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:23.352 [Pipeline] sh 00:02:23.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:23.916 [Pipeline] sh 00:02:24.204 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:24.482 [Pipeline] sh 00:02:24.769 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:25.029 ++ readlink -f spdk_repo 00:02:25.029 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.029 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.029 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.029 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.029 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.029 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.029 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.029 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:25.029 + cd /home/vagrant/spdk_repo 00:02:25.029 + source /etc/os-release 00:02:25.029 ++ NAME='Fedora Linux' 00:02:25.029 ++ VERSION='39 (Cloud Edition)' 00:02:25.029 ++ ID=fedora 00:02:25.029 ++ VERSION_ID=39 00:02:25.029 ++ VERSION_CODENAME= 00:02:25.029 ++ PLATFORM_ID=platform:f39 00:02:25.029 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.029 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.029 ++ LOGO=fedora-logo-icon 00:02:25.029 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.029 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.029 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.029 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.029 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.029 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.029 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.029 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.029 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.029 ++ SUPPORT_END=2024-11-12 00:02:25.029 ++ VARIANT='Cloud Edition' 00:02:25.029 ++ VARIANT_ID=cloud 00:02:25.029 + uname -a 00:02:25.029 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.029 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:25.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:25.598 Hugepages 00:02:25.598 node hugesize free / total 00:02:25.598 node0 1048576kB 0 / 0 00:02:25.598 node0 2048kB 0 / 0 00:02:25.598 00:02:25.598 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.598 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:25.598 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:25.598 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:25.598 + rm -f /tmp/spdk-ld-path 00:02:25.598 + source autorun-spdk.conf 00:02:25.598 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.598 ++ SPDK_RUN_ASAN=1 00:02:25.598 ++ SPDK_RUN_UBSAN=1 00:02:25.598 ++ SPDK_TEST_RAID=1 00:02:25.598 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.598 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.598 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.598 ++ RUN_NIGHTLY=1 00:02:25.598 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:25.598 + [[ -n '' ]] 00:02:25.598 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:25.598 + for M in /var/spdk/build-*-manifest.txt 00:02:25.598 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:25.598 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.598 + for M in /var/spdk/build-*-manifest.txt 00:02:25.598 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:25.598 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.859 + for M in /var/spdk/build-*-manifest.txt 00:02:25.859 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:25.859 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.859 ++ uname 00:02:25.859 + [[ Linux == \L\i\n\u\x ]] 00:02:25.859 + sudo dmesg -T 00:02:25.859 + sudo dmesg --clear 00:02:25.859 + dmesg_pid=6156 00:02:25.859 + sudo dmesg -Tw 00:02:25.859 + [[ Fedora Linux == FreeBSD ]] 00:02:25.859 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.859 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.859 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:25.859 + [[ -x /usr/src/fio-static/fio ]] 00:02:25.859 + export FIO_BIN=/usr/src/fio-static/fio 00:02:25.859 + FIO_BIN=/usr/src/fio-static/fio 00:02:25.859 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:25.859 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:25.859 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:25.859 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.859 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.859 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:25.859 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.859 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.859 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.859 Test configuration: 00:02:25.859 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.859 SPDK_RUN_ASAN=1 00:02:25.859 SPDK_RUN_UBSAN=1 00:02:25.859 SPDK_TEST_RAID=1 00:02:25.859 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.859 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.859 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.859 RUN_NIGHTLY=1 22:57:45 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:25.859 22:57:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:25.859 22:57:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:25.859 22:57:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:25.859 22:57:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.859 22:57:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.859 22:57:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.859 22:57:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.859 22:57:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.859 22:57:45 -- paths/export.sh@5 -- $ export PATH 00:02:25.859 22:57:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.859 22:57:45 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:25.859 22:57:45 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:26.120 22:57:45 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731970665.XXXXXX 00:02:26.120 22:57:45 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731970665.7yPL93 00:02:26.120 22:57:45 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:26.120 22:57:45 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:26.120 22:57:45 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:26.120 22:57:45 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:26.120 22:57:45 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:26.120 22:57:45 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:26.120 22:57:45 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:26.120 22:57:45 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:26.120 22:57:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.120 22:57:45 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:26.120 22:57:45 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:26.120 22:57:45 -- pm/common@17 -- $ local monitor 00:02:26.120 22:57:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.120 22:57:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.120 22:57:45 -- pm/common@25 -- $ sleep 1 00:02:26.120 22:57:45 -- pm/common@21 -- $ date +%s 00:02:26.120 22:57:45 -- pm/common@21 -- $ date +%s 00:02:26.120 22:57:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731970665 00:02:26.120 22:57:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731970665 00:02:26.120 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731970665_collect-vmstat.pm.log 00:02:26.120 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731970665_collect-cpu-load.pm.log 00:02:27.061 22:57:46 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:27.061 22:57:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:27.061 22:57:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:27.061 22:57:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:27.061 22:57:46 -- spdk/autobuild.sh@16 -- $ date -u 00:02:27.061 Mon Nov 18 10:57:46 PM UTC 2024 00:02:27.061 22:57:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.061 v24.09-1-gb18e1bd62 00:02:27.061 22:57:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:27.061 22:57:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:27.061 22:57:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.061 22:57:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.061 22:57:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.061 ************************************ 00:02:27.061 START TEST asan 00:02:27.061 ************************************ 00:02:27.061 using asan 00:02:27.061 22:57:46 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:27.061 00:02:27.061 real 0m0.001s 00:02:27.061 user 0m0.000s 00:02:27.061 sys 0m0.000s 00:02:27.061 22:57:46 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:27.061 22:57:46 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.061 ************************************ 00:02:27.061 END TEST asan 00:02:27.061 ************************************ 00:02:27.061 22:57:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.061 22:57:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.061 22:57:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.062 22:57:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.062 22:57:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.062 
************************************ 00:02:27.062 START TEST ubsan 00:02:27.062 ************************************ 00:02:27.062 using ubsan 00:02:27.062 22:57:46 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:27.062 00:02:27.062 real 0m0.000s 00:02:27.062 user 0m0.000s 00:02:27.062 sys 0m0.000s 00:02:27.062 22:57:46 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:27.062 22:57:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.062 ************************************ 00:02:27.062 END TEST ubsan 00:02:27.062 ************************************ 00:02:27.322 22:57:46 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:27.322 22:57:46 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:27.322 22:57:46 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:27.322 22:57:46 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:27.322 22:57:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.322 22:57:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.322 ************************************ 00:02:27.322 START TEST build_native_dpdk 00:02:27.322 ************************************ 00:02:27.322 22:57:46 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:27.322 22:57:46 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:27.322 22:57:46 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:27.322 22:57:46 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
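Editor's note: the asan/ubsan smoke tests earlier in this trace follow a common banner-and-timing pattern. The sketch below is an assumed simplification of what `run_test` in `autotest_common.sh` does (print START/END banners around a timed command); the real helper also handles xtrace and argument counting, which are omitted here.

```shell
# Hedged sketch of the run_test banner/timing pattern visible in the trace.
# Assumption: the real run_test in autotest_common.sh is more elaborate.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # real/user/sys lines appear in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

# Matches the invocation seen above: run_test asan echo 'using asan'
run_test asan echo 'using asan'
```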
00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:27.323 eeb0605f11 version: 23.11.0 00:02:27.323 238778122a doc: update release notes for 23.11 00:02:27.323 46aa6b3cfc doc: fix description of RSS features 00:02:27.323 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:27.323 7e421ae345 devtools: support skipping forbid rule check 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.323 22:57:46 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:27.323 patching file config/rte_config.h 00:02:27.323 Hunk #1 succeeded at 60 (offset 1 line). 
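Editor's note: the `cmp_versions` trace above splits both versions on `.`, `-`, and `:` (via `IFS=.-:` and `read -ra`) and compares the fields numerically, left to right, forcing base-10 via the `decimal` helper. A minimal self-contained sketch of that comparison (an assumed simplification, not the actual `scripts/common.sh` code):

```shell
# Hedged sketch of the field-wise version comparison traced above.
# Returns 0 when $1 < $2, 1 otherwise.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=$((10#${v1[i]:-0}))       # 10# mirrors the decimal helper: avoid octal
        b=$((10#${v2[i]:-0}))
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1                         # equal versions: not less-than
}

# Matches the trace: lt 23.11.0 21.11.0 returns 1; lt 23.11.0 24.07.0 returns 0.
version_lt 23.11.0 24.07.0 && echo "23.11.0 < 24.07.0"
```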
00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:27.323 patching file lib/pcapng/rte_pcapng.c 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.323 22:57:46 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.323 22:57:46 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:27.323 22:57:46 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:27.324 22:57:46 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:33.919 The Meson build system 00:02:33.919 Version: 1.5.0 00:02:33.919 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:33.919 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:33.919 Build type: native build 00:02:33.919 Program cat found: YES (/usr/bin/cat) 00:02:33.919 Project name: DPDK 00:02:33.919 Project version: 23.11.0 00:02:33.919 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:33.919 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:33.919 Host machine cpu family: x86_64 00:02:33.919 Host machine cpu: x86_64 00:02:33.919 Message: ## Building in Developer Mode ## 00:02:33.919 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:33.919 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:33.919 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:33.919 Program python3 found: YES (/usr/bin/python3) 00:02:33.919 Program cat found: YES (/usr/bin/cat) 00:02:33.919 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
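Editor's note: the warning just above reports that the `machine` option is deprecated. Going by the warning's own wording, the modern spelling of the same flag would presumably be:

```shell
# Assumed equivalent of -Dmachine=native, per the deprecation warning above
# (config fragment only; run inside the DPDK source tree).
meson setup build-tmp -Dcpu_instruction_set=native
```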
00:02:33.919 Compiler for C supports arguments -march=native: YES 00:02:33.919 Checking for size of "void *" : 8 00:02:33.919 Checking for size of "void *" : 8 (cached) 00:02:33.919 Library m found: YES 00:02:33.919 Library numa found: YES 00:02:33.919 Has header "numaif.h" : YES 00:02:33.919 Library fdt found: NO 00:02:33.919 Library execinfo found: NO 00:02:33.919 Has header "execinfo.h" : YES 00:02:33.919 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:33.919 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:33.919 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:33.919 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:33.919 Run-time dependency openssl found: YES 3.1.1 00:02:33.919 Run-time dependency libpcap found: YES 1.10.4 00:02:33.919 Has header "pcap.h" with dependency libpcap: YES 00:02:33.919 Compiler for C supports arguments -Wcast-qual: YES 00:02:33.919 Compiler for C supports arguments -Wdeprecated: YES 00:02:33.919 Compiler for C supports arguments -Wformat: YES 00:02:33.919 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:33.919 Compiler for C supports arguments -Wformat-security: NO 00:02:33.919 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.919 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:33.919 Compiler for C supports arguments -Wnested-externs: YES 00:02:33.919 Compiler for C supports arguments -Wold-style-definition: YES 00:02:33.919 Compiler for C supports arguments -Wpointer-arith: YES 00:02:33.919 Compiler for C supports arguments -Wsign-compare: YES 00:02:33.919 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:33.919 Compiler for C supports arguments -Wundef: YES 00:02:33.919 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.919 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:33.919 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:33.919 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:33.919 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:33.919 Program objdump found: YES (/usr/bin/objdump) 00:02:33.919 Compiler for C supports arguments -mavx512f: YES 00:02:33.919 Checking if "AVX512 checking" compiles: YES 00:02:33.919 Fetching value of define "__SSE4_2__" : 1 00:02:33.919 Fetching value of define "__AES__" : 1 00:02:33.919 Fetching value of define "__AVX__" : 1 00:02:33.919 Fetching value of define "__AVX2__" : 1 00:02:33.919 Fetching value of define "__AVX512BW__" : 1 00:02:33.919 Fetching value of define "__AVX512CD__" : 1 00:02:33.919 Fetching value of define "__AVX512DQ__" : 1 00:02:33.919 Fetching value of define "__AVX512F__" : 1 00:02:33.919 Fetching value of define "__AVX512VL__" : 1 00:02:33.919 Fetching value of define "__PCLMUL__" : 1 00:02:33.919 Fetching value of define "__RDRND__" : 1 00:02:33.919 Fetching value of define "__RDSEED__" : 1 00:02:33.919 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:33.919 Fetching value of define "__znver1__" : (undefined) 00:02:33.919 Fetching value of define "__znver2__" : (undefined) 00:02:33.919 Fetching value of define "__znver3__" : (undefined) 00:02:33.919 Fetching value of define "__znver4__" : (undefined) 00:02:33.919 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:33.919 Message: lib/log: Defining dependency "log" 00:02:33.919 Message: lib/kvargs: Defining dependency "kvargs" 00:02:33.919 Message: lib/telemetry: Defining dependency "telemetry" 00:02:33.919 Checking for function "getentropy" : NO 00:02:33.919 Message: lib/eal: Defining dependency "eal" 00:02:33.919 Message: lib/ring: Defining dependency "ring" 00:02:33.919 Message: lib/rcu: Defining dependency "rcu" 00:02:33.919 Message: lib/mempool: Defining dependency "mempool" 00:02:33.919 Message: lib/mbuf: Defining dependency "mbuf" 00:02:33.919 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:33.919 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:33.919 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:33.920 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:33.920 Compiler for C supports arguments -mpclmul: YES 00:02:33.920 Compiler for C supports arguments -maes: YES 00:02:33.920 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:33.920 Compiler for C supports arguments -mavx512bw: YES 00:02:33.920 Compiler for C supports arguments -mavx512dq: YES 00:02:33.920 Compiler for C supports arguments -mavx512vl: YES 00:02:33.920 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:33.920 Compiler for C supports arguments -mavx2: YES 00:02:33.920 Compiler for C supports arguments -mavx: YES 00:02:33.920 Message: lib/net: Defining dependency "net" 00:02:33.920 Message: lib/meter: Defining dependency "meter" 00:02:33.920 Message: lib/ethdev: Defining dependency "ethdev" 00:02:33.920 Message: lib/pci: Defining dependency "pci" 00:02:33.920 Message: lib/cmdline: Defining dependency "cmdline" 00:02:33.920 Message: lib/metrics: Defining dependency "metrics" 00:02:33.920 Message: lib/hash: Defining dependency "hash" 00:02:33.920 Message: lib/timer: Defining dependency "timer" 00:02:33.920 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:33.920 Message: lib/acl: Defining dependency "acl" 00:02:33.920 Message: lib/bbdev: Defining dependency "bbdev" 00:02:33.920 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:33.920 Run-time dependency libelf found: YES 0.191 00:02:33.920 Message: lib/bpf: Defining dependency "bpf" 00:02:33.920 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:33.920 Message: lib/compressdev: Defining dependency "compressdev" 00:02:33.920 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:33.920 Message: lib/distributor: Defining dependency "distributor" 00:02:33.920 Message: lib/dmadev: Defining dependency "dmadev" 00:02:33.920 Message: lib/efd: Defining dependency "efd" 00:02:33.920 Message: lib/eventdev: Defining dependency "eventdev" 00:02:33.920 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:33.920 Message: lib/gpudev: Defining dependency "gpudev" 00:02:33.920 Message: lib/gro: Defining dependency "gro" 00:02:33.920 Message: lib/gso: Defining dependency "gso" 00:02:33.920 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:33.920 Message: lib/jobstats: Defining dependency "jobstats" 00:02:33.920 Message: lib/latencystats: Defining dependency "latencystats" 00:02:33.920 Message: lib/lpm: Defining dependency "lpm" 00:02:33.920 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:33.920 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:33.920 Message: lib/member: Defining dependency "member" 00:02:33.920 Message: lib/pcapng: Defining dependency "pcapng" 00:02:33.920 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:33.920 Message: lib/power: Defining dependency "power" 00:02:33.920 Message: lib/rawdev: Defining dependency "rawdev" 00:02:33.920 Message: lib/regexdev: Defining dependency "regexdev" 00:02:33.920 Message: lib/mldev: Defining dependency "mldev" 00:02:33.920 Message: lib/rib: Defining dependency "rib" 00:02:33.920 Message: lib/reorder: Defining dependency "reorder" 00:02:33.920 Message: lib/sched: Defining dependency "sched" 00:02:33.920 Message: lib/security: Defining dependency "security" 00:02:33.920 Message: lib/stack: Defining dependency "stack" 00:02:33.920 Has header 
"linux/userfaultfd.h" : YES 00:02:33.920 Has header "linux/vduse.h" : YES 00:02:33.920 Message: lib/vhost: Defining dependency "vhost" 00:02:33.920 Message: lib/ipsec: Defining dependency "ipsec" 00:02:33.920 Message: lib/pdcp: Defining dependency "pdcp" 00:02:33.920 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:33.920 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:33.920 Message: lib/fib: Defining dependency "fib" 00:02:33.920 Message: lib/port: Defining dependency "port" 00:02:33.920 Message: lib/pdump: Defining dependency "pdump" 00:02:33.920 Message: lib/table: Defining dependency "table" 00:02:33.920 Message: lib/pipeline: Defining dependency "pipeline" 00:02:33.920 Message: lib/graph: Defining dependency "graph" 00:02:33.920 Message: lib/node: Defining dependency "node" 00:02:33.920 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:33.920 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:33.920 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.490 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.490 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:34.490 Compiler for C supports arguments -Wno-unused-value: YES 00:02:34.490 Compiler for C supports arguments -Wno-format: YES 00:02:34.490 Compiler for C supports arguments -Wno-format-security: YES 00:02:34.490 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:34.490 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:34.490 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:34.490 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:34.490 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.490 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.490 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.490 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:34.490 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:34.490 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:34.490 Has header "sys/epoll.h" : YES 00:02:34.490 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.490 Configuring doxy-api-html.conf using configuration 00:02:34.490 Configuring doxy-api-man.conf using configuration 00:02:34.490 Program mandb found: YES (/usr/bin/mandb) 00:02:34.490 Program sphinx-build found: NO 00:02:34.490 Configuring rte_build_config.h using configuration 00:02:34.490 Message: 00:02:34.490 ================= 00:02:34.490 Applications Enabled 00:02:34.490 ================= 00:02:34.490 00:02:34.490 apps: 00:02:34.490 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:34.490 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:34.490 test-pmd, test-regex, test-sad, test-security-perf, 00:02:34.490 00:02:34.490 Message: 00:02:34.490 ================= 00:02:34.490 Libraries Enabled 00:02:34.490 ================= 00:02:34.490 00:02:34.490 libs: 00:02:34.490 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.490 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:34.490 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:34.490 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:34.490 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:34.490 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:34.490 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:34.490 00:02:34.490 00:02:34.490 Message: 00:02:34.490 =============== 00:02:34.490 Drivers Enabled 00:02:34.490 =============== 00:02:34.490 00:02:34.490 common: 00:02:34.490 00:02:34.490 bus: 00:02:34.490 pci, vdev, 00:02:34.490 mempool: 00:02:34.490 ring, 00:02:34.490 dma: 
00:02:34.490 00:02:34.490 net: 00:02:34.490 i40e, 00:02:34.490 raw: 00:02:34.490 00:02:34.490 crypto: 00:02:34.490 00:02:34.490 compress: 00:02:34.490 00:02:34.491 regex: 00:02:34.491 00:02:34.491 ml: 00:02:34.491 00:02:34.491 vdpa: 00:02:34.491 00:02:34.491 event: 00:02:34.491 00:02:34.491 baseband: 00:02:34.491 00:02:34.491 gpu: 00:02:34.491 00:02:34.491 00:02:34.491 Message: 00:02:34.491 ================= 00:02:34.491 Content Skipped 00:02:34.491 ================= 00:02:34.491 00:02:34.491 apps: 00:02:34.491 00:02:34.491 libs: 00:02:34.491 00:02:34.491 drivers: 00:02:34.491 common/cpt: not in enabled drivers build config 00:02:34.491 common/dpaax: not in enabled drivers build config 00:02:34.491 common/iavf: not in enabled drivers build config 00:02:34.491 common/idpf: not in enabled drivers build config 00:02:34.491 common/mvep: not in enabled drivers build config 00:02:34.491 common/octeontx: not in enabled drivers build config 00:02:34.491 bus/auxiliary: not in enabled drivers build config 00:02:34.491 bus/cdx: not in enabled drivers build config 00:02:34.491 bus/dpaa: not in enabled drivers build config 00:02:34.491 bus/fslmc: not in enabled drivers build config 00:02:34.491 bus/ifpga: not in enabled drivers build config 00:02:34.491 bus/platform: not in enabled drivers build config 00:02:34.491 bus/vmbus: not in enabled drivers build config 00:02:34.491 common/cnxk: not in enabled drivers build config 00:02:34.491 common/mlx5: not in enabled drivers build config 00:02:34.491 common/nfp: not in enabled drivers build config 00:02:34.491 common/qat: not in enabled drivers build config 00:02:34.491 common/sfc_efx: not in enabled drivers build config 00:02:34.491 mempool/bucket: not in enabled drivers build config 00:02:34.491 mempool/cnxk: not in enabled drivers build config 00:02:34.491 mempool/dpaa: not in enabled drivers build config 00:02:34.491 mempool/dpaa2: not in enabled drivers build config 00:02:34.491 mempool/octeontx: not in enabled drivers build 
config 00:02:34.491 mempool/stack: not in enabled drivers build config 00:02:34.491 dma/cnxk: not in enabled drivers build config 00:02:34.491 dma/dpaa: not in enabled drivers build config 00:02:34.491 dma/dpaa2: not in enabled drivers build config 00:02:34.491 dma/hisilicon: not in enabled drivers build config 00:02:34.491 dma/idxd: not in enabled drivers build config 00:02:34.491 dma/ioat: not in enabled drivers build config 00:02:34.491 dma/skeleton: not in enabled drivers build config 00:02:34.491 net/af_packet: not in enabled drivers build config 00:02:34.491 net/af_xdp: not in enabled drivers build config 00:02:34.491 net/ark: not in enabled drivers build config 00:02:34.491 net/atlantic: not in enabled drivers build config 00:02:34.491 net/avp: not in enabled drivers build config 00:02:34.491 net/axgbe: not in enabled drivers build config 00:02:34.491 net/bnx2x: not in enabled drivers build config 00:02:34.491 net/bnxt: not in enabled drivers build config 00:02:34.491 net/bonding: not in enabled drivers build config 00:02:34.491 net/cnxk: not in enabled drivers build config 00:02:34.491 net/cpfl: not in enabled drivers build config 00:02:34.491 net/cxgbe: not in enabled drivers build config 00:02:34.491 net/dpaa: not in enabled drivers build config 00:02:34.491 net/dpaa2: not in enabled drivers build config 00:02:34.491 net/e1000: not in enabled drivers build config 00:02:34.491 net/ena: not in enabled drivers build config 00:02:34.491 net/enetc: not in enabled drivers build config 00:02:34.491 net/enetfec: not in enabled drivers build config 00:02:34.491 net/enic: not in enabled drivers build config 00:02:34.491 net/failsafe: not in enabled drivers build config 00:02:34.491 net/fm10k: not in enabled drivers build config 00:02:34.491 net/gve: not in enabled drivers build config 00:02:34.491 net/hinic: not in enabled drivers build config 00:02:34.491 net/hns3: not in enabled drivers build config 00:02:34.491 net/iavf: not in enabled drivers build config 
00:02:34.491 net/ice: not in enabled drivers build config 00:02:34.491 net/idpf: not in enabled drivers build config 00:02:34.491 net/igc: not in enabled drivers build config 00:02:34.491 net/ionic: not in enabled drivers build config 00:02:34.491 net/ipn3ke: not in enabled drivers build config 00:02:34.491 net/ixgbe: not in enabled drivers build config 00:02:34.491 net/mana: not in enabled drivers build config 00:02:34.491 net/memif: not in enabled drivers build config 00:02:34.491 net/mlx4: not in enabled drivers build config 00:02:34.491 net/mlx5: not in enabled drivers build config 00:02:34.491 net/mvneta: not in enabled drivers build config 00:02:34.491 net/mvpp2: not in enabled drivers build config 00:02:34.491 net/netvsc: not in enabled drivers build config 00:02:34.491 net/nfb: not in enabled drivers build config 00:02:34.491 net/nfp: not in enabled drivers build config 00:02:34.491 net/ngbe: not in enabled drivers build config 00:02:34.491 net/null: not in enabled drivers build config 00:02:34.491 net/octeontx: not in enabled drivers build config 00:02:34.491 net/octeon_ep: not in enabled drivers build config 00:02:34.491 net/pcap: not in enabled drivers build config 00:02:34.491 net/pfe: not in enabled drivers build config 00:02:34.491 net/qede: not in enabled drivers build config 00:02:34.491 net/ring: not in enabled drivers build config 00:02:34.491 net/sfc: not in enabled drivers build config 00:02:34.491 net/softnic: not in enabled drivers build config 00:02:34.491 net/tap: not in enabled drivers build config 00:02:34.491 net/thunderx: not in enabled drivers build config 00:02:34.491 net/txgbe: not in enabled drivers build config 00:02:34.491 net/vdev_netvsc: not in enabled drivers build config 00:02:34.491 net/vhost: not in enabled drivers build config 00:02:34.491 net/virtio: not in enabled drivers build config 00:02:34.491 net/vmxnet3: not in enabled drivers build config 00:02:34.491 raw/cnxk_bphy: not in enabled drivers build config 00:02:34.491 
raw/cnxk_gpio: not in enabled drivers build config 00:02:34.491 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:34.491 raw/ifpga: not in enabled drivers build config 00:02:34.491 raw/ntb: not in enabled drivers build config 00:02:34.491 raw/skeleton: not in enabled drivers build config 00:02:34.491 crypto/armv8: not in enabled drivers build config 00:02:34.491 crypto/bcmfs: not in enabled drivers build config 00:02:34.491 crypto/caam_jr: not in enabled drivers build config 00:02:34.491 crypto/ccp: not in enabled drivers build config 00:02:34.491 crypto/cnxk: not in enabled drivers build config 00:02:34.491 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.491 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.491 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.491 crypto/mlx5: not in enabled drivers build config 00:02:34.491 crypto/mvsam: not in enabled drivers build config 00:02:34.491 crypto/nitrox: not in enabled drivers build config 00:02:34.491 crypto/null: not in enabled drivers build config 00:02:34.491 crypto/octeontx: not in enabled drivers build config 00:02:34.491 crypto/openssl: not in enabled drivers build config 00:02:34.491 crypto/scheduler: not in enabled drivers build config 00:02:34.491 crypto/uadk: not in enabled drivers build config 00:02:34.491 crypto/virtio: not in enabled drivers build config 00:02:34.491 compress/isal: not in enabled drivers build config 00:02:34.491 compress/mlx5: not in enabled drivers build config 00:02:34.491 compress/octeontx: not in enabled drivers build config 00:02:34.491 compress/zlib: not in enabled drivers build config 00:02:34.491 regex/mlx5: not in enabled drivers build config 00:02:34.491 regex/cn9k: not in enabled drivers build config 00:02:34.491 ml/cnxk: not in enabled drivers build config 00:02:34.491 vdpa/ifc: not in enabled drivers build config 00:02:34.491 vdpa/mlx5: not in enabled drivers build config 00:02:34.491 vdpa/nfp: not in enabled drivers build 
config 00:02:34.491 vdpa/sfc: not in enabled drivers build config 00:02:34.491 event/cnxk: not in enabled drivers build config 00:02:34.491 event/dlb2: not in enabled drivers build config 00:02:34.491 event/dpaa: not in enabled drivers build config 00:02:34.491 event/dpaa2: not in enabled drivers build config 00:02:34.491 event/dsw: not in enabled drivers build config 00:02:34.491 event/opdl: not in enabled drivers build config 00:02:34.491 event/skeleton: not in enabled drivers build config 00:02:34.491 event/sw: not in enabled drivers build config 00:02:34.491 event/octeontx: not in enabled drivers build config 00:02:34.491 baseband/acc: not in enabled drivers build config 00:02:34.491 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:34.491 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:34.491 baseband/la12xx: not in enabled drivers build config 00:02:34.491 baseband/null: not in enabled drivers build config 00:02:34.491 baseband/turbo_sw: not in enabled drivers build config 00:02:34.491 gpu/cuda: not in enabled drivers build config
00:02:34.491 
00:02:34.491 
00:02:34.491 Build targets in project: 217
00:02:34.491 
00:02:34.491 DPDK 23.11.0
00:02:34.491 
00:02:34.491 User defined options
00:02:34.491   libdir        : lib
00:02:34.491   prefix        : /home/vagrant/spdk_repo/dpdk/build
00:02:34.491   c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:34.491   c_link_args   : 
00:02:34.491   enable_docs   : false
00:02:34.491   enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:34.491   enable_kmods  : false
00:02:34.491   machine       : native
00:02:34.491   tests         : false
00:02:34.491 
00:02:34.491 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:34.491 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:34.751 22:57:53 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:34.751 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:34.751 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.751 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.751 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.751 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.751 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.751 [6/707] Linking static target lib/librte_kvargs.a 00:02:34.751 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.010 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:35.010 [9/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:35.010 [10/707] Linking static target lib/librte_log.a 00:02:35.010 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.010 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.010 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.010 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.273 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.273 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.273 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.273 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.273 [19/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.273 [20/707] Linking target lib/librte_log.so.24.0 00:02:35.532 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.532 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.532 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.532 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.532 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.532 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.532 [27/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.793 [28/707] Linking static target lib/librte_telemetry.a 00:02:35.793 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.793 [30/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:35.793 [31/707] Linking target lib/librte_kvargs.so.24.0 00:02:35.793 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.793 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.793 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:35.793 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.793 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.793 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:35.793 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.793 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.054 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.054 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.054 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:36.054 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:36.054 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.054 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:36.054 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.314 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.314 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.314 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.314 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.314 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.314 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.314 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:36.314 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.574 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.575 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.575 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.575 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.575 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.575 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.575 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:36.575 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.575 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:36.575 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.835 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.835 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.835 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.835 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.835 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.835 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.095 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.095 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.095 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.095 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.095 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.095 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.095 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.095 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.359 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:37.359 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.359 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.359 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.359 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.359 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.359 [85/707] Linking static target lib/librte_ring.a 00:02:37.669 [86/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.669 [87/707] Linking static target lib/librte_eal.a 00:02:37.669 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.669 
[89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.669 [90/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.669 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.669 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.669 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.669 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.669 [95/707] Linking static target lib/librte_mempool.a 00:02:37.945 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:37.945 [97/707] Linking static target lib/librte_rcu.a 00:02:37.945 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:37.945 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:37.945 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:37.945 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:37.945 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.206 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.206 [104/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.206 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.206 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.206 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.206 [108/707] Linking static target lib/librte_mbuf.a 00:02:38.466 [109/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.466 [110/707] Linking static target lib/librte_net.a 00:02:38.466 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.466 [112/707] Linking static target lib/librte_meter.a 00:02:38.466 [113/707] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.466 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.466 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.466 [116/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.466 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.726 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.726 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.987 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.987 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.247 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.247 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.247 [124/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.247 [125/707] Linking static target lib/librte_pci.a 00:02:39.247 [126/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.247 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.507 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.507 [129/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.507 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.508 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.508 [132/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.508 [133/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.508 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.508 [135/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.508 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.508 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.508 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.508 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.508 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.768 [141/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.768 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.768 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.768 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.768 [145/707] Linking static target lib/librte_cmdline.a 00:02:40.028 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:40.028 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:40.028 [148/707] Linking static target lib/librte_metrics.a 00:02:40.028 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.028 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.287 [151/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.287 [152/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.287 [153/707] Linking static target lib/librte_timer.a 00:02:40.287 [154/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.547 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.547 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:40.547 [157/707] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:40.807 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:40.807 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:41.068 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:41.068 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:41.068 [162/707] Linking static target lib/librte_bitratestats.a 00:02:41.328 [163/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.328 [164/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:41.328 [165/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:41.328 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:41.328 [167/707] Linking static target lib/librte_bbdev.a 00:02:41.588 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:41.588 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:41.847 [170/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.847 [171/707] Linking static target lib/librte_hash.a 00:02:41.847 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:41.847 [173/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.847 [174/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:41.847 [175/707] Linking static target lib/acl/libavx2_tmp.a 00:02:42.108 [176/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:42.108 [177/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:42.108 [178/707] Linking static target lib/librte_ethdev.a 00:02:42.108 [179/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.108 [180/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:42.108 [181/707] Linking target lib/librte_eal.so.24.0 
00:02:42.367 [182/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.367 [183/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:42.367 [184/707] Linking target lib/librte_ring.so.24.0 00:02:42.367 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:42.367 [186/707] Linking target lib/librte_meter.so.24.0 00:02:42.367 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:42.367 [188/707] Linking target lib/librte_pci.so.24.0 00:02:42.367 [189/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:42.367 [190/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:42.367 [191/707] Linking target lib/librte_timer.so.24.0 00:02:42.367 [192/707] Linking target lib/librte_rcu.so.24.0 00:02:42.367 [193/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:42.367 [194/707] Linking target lib/librte_mempool.so.24.0 00:02:42.367 [195/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:42.367 [196/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:42.367 [197/707] Linking static target lib/librte_cfgfile.a 00:02:42.627 [198/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:42.627 [199/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:42.627 [200/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:42.627 [201/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:42.627 [202/707] Linking target lib/librte_mbuf.so.24.0 00:02:42.627 [203/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.888 [204/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:42.888 [205/707] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:42.888 [206/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.888 [207/707] Linking static target lib/librte_bpf.a 00:02:42.888 [208/707] Linking target lib/librte_bbdev.so.24.0 00:02:42.888 [209/707] Linking target lib/librte_net.so.24.0 00:02:42.888 [210/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.888 [211/707] Linking target lib/librte_cfgfile.so.24.0 00:02:42.888 [212/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:42.888 [213/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:42.888 [214/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.888 [215/707] Linking target lib/librte_cmdline.so.24.0 00:02:42.888 [216/707] Linking target lib/librte_hash.so.24.0 00:02:42.888 [217/707] Linking static target lib/librte_acl.a 00:02:42.888 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.148 [219/707] Linking static target lib/librte_compressdev.a 00:02:43.148 [220/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.148 [221/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:43.148 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.148 [223/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.148 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:43.148 [225/707] Linking target lib/librte_acl.so.24.0 00:02:43.409 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:43.409 [227/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:43.409 [228/707] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.409 [229/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.409 [230/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:43.409 [231/707] Linking target lib/librte_compressdev.so.24.0 00:02:43.409 [232/707] Linking static target lib/librte_distributor.a 00:02:43.669 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:43.669 [234/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.669 [235/707] Linking static target lib/librte_dmadev.a 00:02:43.669 [236/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.669 [237/707] Linking target lib/librte_distributor.so.24.0 00:02:43.928 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:43.928 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.928 [240/707] Linking target lib/librte_dmadev.so.24.0 00:02:44.188 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:44.188 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:44.188 [243/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:44.189 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:44.189 [245/707] Linking static target lib/librte_efd.a 00:02:44.449 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.449 [247/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:44.449 [248/707] Linking target lib/librte_efd.so.24.0 00:02:44.449 [249/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.449 [250/707] Linking static target lib/librte_cryptodev.a 
00:02:44.709 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:44.709 [252/707] Linking static target lib/librte_dispatcher.a 00:02:44.709 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:44.709 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.709 [255/707] Linking static target lib/librte_gpudev.a 00:02:44.969 [256/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.969 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:44.969 [258/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.969 [259/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:45.229 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:45.489 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:45.489 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:45.489 [263/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.489 [264/707] Linking target lib/librte_gpudev.so.24.0 00:02:45.489 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:45.489 [266/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.489 [267/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:45.489 [268/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:45.489 [269/707] Linking static target lib/librte_gro.a 00:02:45.489 [270/707] Linking target lib/librte_cryptodev.so.24.0 00:02:45.489 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:45.749 [272/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:45.749 [273/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:45.749 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:45.749 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:45.749 [276/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:45.749 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:45.749 [278/707] Linking static target lib/librte_eventdev.a 00:02:46.008 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:46.008 [280/707] Linking static target lib/librte_gso.a 00:02:46.008 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.008 [282/707] Linking target lib/librte_ethdev.so.24.0 00:02:46.008 [283/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.008 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:46.008 [285/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:46.008 [286/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:46.268 [287/707] Linking target lib/librte_metrics.so.24.0 00:02:46.268 [288/707] Linking target lib/librte_bpf.so.24.0 00:02:46.268 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:46.268 [290/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:46.268 [291/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:46.268 [292/707] Linking static target lib/librte_jobstats.a 00:02:46.268 [293/707] Linking target lib/librte_gro.so.24.0 00:02:46.268 [294/707] Linking target lib/librte_gso.so.24.0 00:02:46.268 [295/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:46.268 [296/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:46.268 [297/707] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:46.268 [298/707] Linking target lib/librte_bitratestats.so.24.0 00:02:46.268 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:46.268 [300/707] Linking static target lib/librte_ip_frag.a 00:02:46.528 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.528 [302/707] Linking target lib/librte_jobstats.so.24.0 00:02:46.528 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:46.528 [304/707] Linking static target lib/librte_latencystats.a 00:02:46.528 [305/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.528 [306/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:46.528 [307/707] Linking target lib/librte_ip_frag.so.24.0 00:02:46.788 [308/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:46.788 [309/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.788 [310/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:46.788 [311/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:46.788 [312/707] Linking target lib/librte_latencystats.so.24.0 00:02:46.788 [313/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:46.788 [314/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.788 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.788 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.047 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:47.048 [318/707] Linking static target lib/librte_lpm.a 00:02:47.048 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:47.048 [320/707] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:47.307 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:47.307 [322/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:47.307 [323/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:47.307 [324/707] Linking static target lib/librte_pcapng.a
00:02:47.307 [325/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:47.307 [326/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:47.307 [327/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.307 [328/707] Linking target lib/librte_lpm.so.24.0
00:02:47.566 [329/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.566 [330/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.566 [331/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:47.566 [332/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:47.566 [333/707] Linking target lib/librte_pcapng.so.24.0
00:02:47.567 [334/707] Linking target lib/librte_eventdev.so.24.0
00:02:47.567 [335/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:47.567 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:47.567 [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:47.567 [338/707] Linking target lib/librte_dispatcher.so.24.0
00:02:47.826 [339/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:47.826 [340/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:47.826 [341/707] Linking static target lib/librte_power.a
00:02:47.826 [342/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:47.826 [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:47.826 [344/707] Linking static target lib/librte_regexdev.a
00:02:47.826 [345/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:47.826 [346/707] Linking static target lib/librte_rawdev.a
00:02:47.826 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:48.086 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:48.086 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:48.086 [350/707] Linking static target lib/librte_member.a
00:02:48.086 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:48.086 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:48.086 [353/707] Linking static target lib/librte_mldev.a
00:02:48.086 [354/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.345 [355/707] Linking target lib/librte_rawdev.so.24.0
00:02:48.345 [356/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.345 [357/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.345 [358/707] Linking target lib/librte_member.so.24.0
00:02:48.345 [359/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:48.345 [360/707] Linking target lib/librte_power.so.24.0
00:02:48.345 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:48.345 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:48.345 [363/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.346 [364/707] Linking target lib/librte_regexdev.so.24.0
00:02:48.346 [365/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:48.346 [366/707] Linking static target lib/librte_reorder.a
00:02:48.605 [367/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:48.605 [368/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:48.605 [369/707] Linking static target lib/librte_rib.a
00:02:48.605 [370/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:48.605 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:48.605 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:48.605 [373/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.605 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:48.605 [375/707] Linking static target lib/librte_stack.a
00:02:48.605 [376/707] Linking target lib/librte_reorder.so.24.0
00:02:48.865 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:48.865 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:48.865 [379/707] Linking static target lib/librte_security.a
00:02:48.865 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.865 [381/707] Linking target lib/librte_stack.so.24.0
00:02:48.865 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.865 [383/707] Linking target lib/librte_rib.so.24.0
00:02:49.135 [384/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:49.135 [385/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:49.135 [386/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.135 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:49.135 [388/707] Linking target lib/librte_mldev.so.24.0
00:02:49.135 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.135 [390/707] Linking target lib/librte_security.so.24.0
00:02:49.135 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:49.411 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:49.411 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:49.411 [394/707] Linking static target lib/librte_sched.a
00:02:49.670 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:49.670 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:49.670 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.670 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:49.930 [399/707] Linking target lib/librte_sched.so.24.0
00:02:49.930 [400/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:49.930 [401/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:49.930 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:50.189 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:50.189 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:50.189 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:50.189 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:50.189 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:50.448 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:50.448 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:50.707 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:50.707 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:50.707 [412/707] Linking static target lib/librte_ipsec.a
00:02:50.708 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:50.708 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:50.708 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:50.967 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.967 [417/707] Linking target lib/librte_ipsec.so.24.0
00:02:50.967 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:50.967 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:50.967 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:51.227 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:51.227 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:51.487 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:51.487 [424/707] Linking static target lib/librte_fib.a
00:02:51.487 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:51.487 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:51.487 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:51.747 [428/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.747 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:51.747 [430/707] Linking target lib/librte_fib.so.24.0
00:02:51.747 [431/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:02:51.747 [432/707] Linking static target lib/librte_pdcp.a
00:02:52.007 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.007 [434/707] Linking target lib/librte_pdcp.so.24.0
00:02:52.007 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:52.268 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:52.268 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:52.268 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:52.268 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:52.268 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:52.528 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:52.528 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:52.787 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:52.787 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:52.787 [445/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:52.787 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:52.787 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:52.787 [448/707] Linking static target lib/librte_port.a
00:02:53.047 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:53.047 [450/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:53.047 [451/707] Linking static target lib/librte_pdump.a
00:02:53.047 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:53.047 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:53.047 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.047 [455/707] Linking target lib/librte_port.so.24.0
00:02:53.306 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.306 [457/707] Linking target lib/librte_pdump.so.24.0
00:02:53.306 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:53.566 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:53.566 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:53.566 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:53.566 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:53.566 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:53.566 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:53.826 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:53.826 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:53.826 [467/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:53.826 [468/707] Linking static target lib/librte_table.a
00:02:54.086 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:54.086 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:54.345 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:54.605 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.605 [473/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:54.605 [474/707] Linking target lib/librte_table.so.24.0
00:02:54.605 [475/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:54.605 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:54.865 [477/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:54.865 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:55.125 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:55.125 [480/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:55.125 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:55.125 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:55.125 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:55.385 [484/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:55.385 [485/707] Linking static target lib/librte_graph.a
00:02:55.385 [486/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:55.385 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:55.645 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:55.645 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:55.645 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:55.905 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.905 [492/707] Linking target lib/librte_graph.so.24.0
00:02:55.905 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:56.164 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:56.164 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:56.164 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:56.164 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:56.424 [498/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:56.424 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:56.424 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:56.424 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:56.424 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:56.684 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:02:56.684 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:56.684 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:56.944 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:56.944 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:56.944 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:56.944 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:56.944 [510/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:02:56.944 [511/707] Linking static target lib/librte_node.a
00:02:56.944 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:57.211 [513/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:57.211 [514/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:57.211 [515/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:57.211 [516/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:57.211 [517/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.211 [518/707] Linking target lib/librte_node.so.24.0
00:02:57.491 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:57.491 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:57.491 [521/707] Linking static target drivers/librte_bus_vdev.a
00:02:57.491 [522/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:57.491 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:57.491 [524/707] Linking static target drivers/librte_bus_pci.a
00:02:57.491 [525/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.491 [526/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:57.491 [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:57.491 [528/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:57.491 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:57.751 [530/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:57.751 [531/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:57.751 [532/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:57.751 [533/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:57.751 [534/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:57.751 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.751 [536/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:58.010 [537/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:58.010 [538/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:58.010 [539/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:58.010 [540/707] Linking static target drivers/librte_mempool_ring.a
00:02:58.010 [541/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:58.010 [542/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:58.010 [543/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:58.270 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:58.530 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:58.789 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:58.789 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:59.049 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:59.309 [549/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:02:59.309 [550/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:02:59.309 [551/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:59.569 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:59.569 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:59.569 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:59.569 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:59.828 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:59.828 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:00.088 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:00.088 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:00.088 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:00.347 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:00.347 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:00.608 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:00.608 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:00.867 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:00.867 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:00.867 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:00.867 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:01.126 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:01.127 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:01.127 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:01.127 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:01.127 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:01.386 [574/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:01.386 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:01.386 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:01.646 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:01.646 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:01.646 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:01.905 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:01.905 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:01.905 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:01.905 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:02.165 [584/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:02.165 [585/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:02.165 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:02.165 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:02.165 [588/707] Linking static target drivers/librte_net_i40e.a
00:03:02.165 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:02.165 [590/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:02.424 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:02.684 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:02.684 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:02.684 [594/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.684 [595/707] Linking target drivers/librte_net_i40e.so.24.0
00:03:02.684 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:02.943 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:02.943 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:02.943 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:03.203 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:03.203 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:03.471 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:03.471 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:03.471 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:03.471 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:03.471 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:03.730 [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:03.730 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:03.730 [609/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:03.730 [610/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:03.990 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:03.990 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:03.990 [613/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:03.990 [614/707] Linking static target lib/librte_vhost.a
00:03:03.990 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:04.250 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:04.250 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:04.250 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:04.820 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.079 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:05.079 [621/707] Linking target lib/librte_vhost.so.24.0
00:03:05.079 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:05.079 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:05.079 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:05.338 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:05.338 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:05.338 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:05.338 [628/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:05.598 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:05.598 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:05.598 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:05.598 [632/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:05.598 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:05.858 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:05.858 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:05.858 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:05.858 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:05.858 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:06.118 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:06.118 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:06.118 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:06.378 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:06.378 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:06.378 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:06.378 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:06.637 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:06.637 [647/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:06.637 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:06.637 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:06.637 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:06.637 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:06.901 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:06.901 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:07.165 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:07.165 [655/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:07.165 [656/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:07.423 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:07.423 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:07.423 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:07.682 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:07.682 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:07.682 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:07.941 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:07.941 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:07.941 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:08.201 [666/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:08.201 [667/707] Linking static target lib/librte_pipeline.a
00:03:08.201 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:08.201 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:08.462 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:08.462 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:08.462 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:08.462 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:08.722 [674/707] Linking target app/dpdk-dumpcap
00:03:08.722 [675/707] Linking target app/dpdk-graph
00:03:08.722 [676/707] Linking target app/dpdk-pdump
00:03:08.722 [677/707] Linking target app/dpdk-proc-info
00:03:08.722 [678/707] Linking target app/dpdk-test-acl
00:03:08.982 [679/707] Linking target app/dpdk-test-bbdev
00:03:08.982 [680/707] Linking target app/dpdk-test-cmdline
00:03:08.982 [681/707] Linking target app/dpdk-test-compress-perf
00:03:09.242 [682/707] Linking target app/dpdk-test-crypto-perf
00:03:09.242 [683/707] Linking target app/dpdk-test-dma-perf
00:03:09.242 [684/707] Linking target app/dpdk-test-eventdev
00:03:09.242 [685/707] Linking target app/dpdk-test-fib
00:03:09.242 [686/707] Linking target app/dpdk-test-flow-perf
00:03:09.242 [687/707] Linking target app/dpdk-test-gpudev
00:03:09.503 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:09.503 [689/707] Linking target app/dpdk-test-pipeline
00:03:09.503 [690/707] Linking target app/dpdk-test-mldev
00:03:09.763 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:09.763 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:10.032 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:10.032 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:10.032 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:10.032 [696/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:10.292 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:10.292 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:10.552 [699/707] Linking target app/dpdk-test-sad
00:03:10.552 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:10.552 [701/707] Linking target app/dpdk-test-regex
00:03:10.812 [702/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:10.812 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:11.073 [704/707] Linking target app/dpdk-test-security-perf
00:03:11.073 [705/707] Linking target app/dpdk-testpmd
00:03:11.333 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.333 [707/707] Linking target lib/librte_pipeline.so.24.0
00:03:11.333 22:58:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:11.333 22:58:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:11.333 22:58:30 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:11.333 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:11.333 [0/1] Installing files.
00:03:11.595 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.596 
Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.597 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.598 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.598 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.598 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.599 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.600 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.600 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.600 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.600 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.600 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_bbdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.861 Installing lib/librte_gpudev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.861 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.862 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:12.125 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:12.125 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:12.125 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.125 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:12.125 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.126 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:12.127 Installing
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.128 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.128 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:12.128 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:12.128 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:12.128 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:12.128 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:12.128 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:12.128 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:12.128 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:12.128 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:12.128 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:12.128 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:12.128 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:12.128 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:12.128 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:12.128 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:12.128 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:12.128 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:12.128 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:12.128 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:12.128 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:12.128 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:12.128 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:12.128 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:12.128 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:12.128 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:12.128 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:12.128 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:12.128 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:12.128 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:12.128 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:12.128 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:12.128 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:12.128 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:12.128 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:12.128 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:12.128 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:12.128 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:12.128 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:12.128 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:12.128 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:12.128 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:12.129 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:12.129 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:12.129 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:12.129 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:12.129 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:12.129 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:12.129 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:12.129 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:12.129 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:12.129 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:12.129 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:12.129 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:12.129 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:12.129 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:12.129 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:12.129 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:12.129 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:12.129 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:12.129 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:12.129 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:12.129 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:12.129 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:12.129 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:12.129 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:12.129 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:12.129 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:12.129 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:12.129 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:12.129 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:12.129 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:12.129 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:12.129 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:12.129 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:12.129 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:12.129 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:12.129 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:12.129 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:12.129 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:12.129 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:12.129 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:12.129 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:12.129 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:12.129 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:12.129 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:12.129 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:12.129 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:12.129 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:12.129 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:12.129 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:12.129 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:12.129 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:12.129 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:12.129 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:12.129 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:12.129 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:12.129 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:12.129 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:12.129 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:12.129 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:12.129 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:12.129 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:12.129 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:12.129 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:12.129 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:12.129 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:12.129 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:12.129 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:12.129 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:12.129 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:12.129 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:12.129 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:12.129 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:12.129 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:12.129 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:12.129 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:12.129 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:12.129 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:12.129 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:12.129 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:12.129 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:12.129 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:12.129 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:12.129 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:12.129 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:12.129 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:12.129 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:12.129 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:12.129 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:12.129 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:12.129 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:12.130 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:12.130 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:12.130 22:58:31 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:12.130 22:58:31 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:12.130 00:03:12.130 real 0m44.966s 00:03:12.130 user 5m5.843s 00:03:12.130 sys 0m50.745s 00:03:12.130 22:58:31 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:12.130 22:58:31 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:12.130 ************************************ 00:03:12.130 END TEST build_native_dpdk 00:03:12.130 ************************************ 00:03:12.130 22:58:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.130 22:58:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.130 22:58:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.130 22:58:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.130 22:58:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.130 22:58:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.130 22:58:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.130 22:58:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:12.390 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:12.650 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.650 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:12.650 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.910 Using 'verbs' RDMA provider 00:03:29.189 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:44.089 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:44.660 Creating mk/config.mk...done. 00:03:44.660 Creating mk/cc.flags.mk...done. 00:03:44.660 Type 'make' to build. 00:03:44.660 22:59:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:44.660 22:59:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:44.660 22:59:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:44.660 22:59:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.660 ************************************ 00:03:44.660 START TEST make 00:03:44.660 ************************************ 00:03:44.660 22:59:03 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:45.231 make[1]: Nothing to be done for 'all'. 
00:04:31.927 CC lib/log/log.o 00:04:31.927 CC lib/ut/ut.o 00:04:31.927 CC lib/log/log_flags.o 00:04:31.927 CC lib/log/log_deprecated.o 00:04:31.928 CC lib/ut_mock/mock.o 00:04:31.928 LIB libspdk_ut.a 00:04:31.928 LIB libspdk_log.a 00:04:31.928 LIB libspdk_ut_mock.a 00:04:31.928 SO libspdk_ut_mock.so.6.0 00:04:31.928 SO libspdk_ut.so.2.0 00:04:31.928 SO libspdk_log.so.7.0 00:04:31.928 SYMLINK libspdk_ut_mock.so 00:04:31.928 SYMLINK libspdk_log.so 00:04:31.928 SYMLINK libspdk_ut.so 00:04:31.928 CC lib/dma/dma.o 00:04:31.928 CC lib/util/crc16.o 00:04:31.928 CC lib/util/base64.o 00:04:31.928 CC lib/util/bit_array.o 00:04:31.928 CC lib/util/cpuset.o 00:04:31.928 CC lib/util/crc32c.o 00:04:31.928 CC lib/util/crc32.o 00:04:31.928 CXX lib/trace_parser/trace.o 00:04:31.928 CC lib/ioat/ioat.o 00:04:31.928 CC lib/vfio_user/host/vfio_user_pci.o 00:04:31.928 CC lib/util/crc32_ieee.o 00:04:31.928 CC lib/util/crc64.o 00:04:31.928 CC lib/util/dif.o 00:04:31.928 CC lib/util/fd.o 00:04:31.928 LIB libspdk_dma.a 00:04:31.928 SO libspdk_dma.so.5.0 00:04:31.928 CC lib/util/fd_group.o 00:04:31.928 CC lib/vfio_user/host/vfio_user.o 00:04:31.928 CC lib/util/file.o 00:04:31.928 SYMLINK libspdk_dma.so 00:04:31.928 CC lib/util/hexlify.o 00:04:31.928 CC lib/util/iov.o 00:04:31.928 CC lib/util/math.o 00:04:31.928 LIB libspdk_ioat.a 00:04:31.928 SO libspdk_ioat.so.7.0 00:04:31.928 CC lib/util/net.o 00:04:31.928 CC lib/util/pipe.o 00:04:31.928 SYMLINK libspdk_ioat.so 00:04:31.928 CC lib/util/strerror_tls.o 00:04:31.928 CC lib/util/string.o 00:04:31.928 LIB libspdk_vfio_user.a 00:04:31.928 CC lib/util/uuid.o 00:04:31.928 CC lib/util/xor.o 00:04:31.928 CC lib/util/zipf.o 00:04:31.928 SO libspdk_vfio_user.so.5.0 00:04:31.928 CC lib/util/md5.o 00:04:31.928 SYMLINK libspdk_vfio_user.so 00:04:31.928 LIB libspdk_util.a 00:04:31.928 SO libspdk_util.so.10.0 00:04:31.928 LIB libspdk_trace_parser.a 00:04:31.928 SO libspdk_trace_parser.so.6.0 00:04:31.928 SYMLINK libspdk_util.so 00:04:31.928 SYMLINK 
libspdk_trace_parser.so 00:04:31.928 CC lib/conf/conf.o 00:04:31.928 CC lib/json/json_parse.o 00:04:31.928 CC lib/json/json_util.o 00:04:31.928 CC lib/json/json_write.o 00:04:31.928 CC lib/env_dpdk/env.o 00:04:31.928 CC lib/env_dpdk/memory.o 00:04:31.928 CC lib/rdma_utils/rdma_utils.o 00:04:31.928 CC lib/rdma_provider/common.o 00:04:31.928 CC lib/idxd/idxd.o 00:04:31.928 CC lib/vmd/vmd.o 00:04:31.928 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:31.928 LIB libspdk_conf.a 00:04:31.928 CC lib/idxd/idxd_user.o 00:04:31.928 CC lib/idxd/idxd_kernel.o 00:04:31.928 SO libspdk_conf.so.6.0 00:04:31.928 LIB libspdk_rdma_utils.a 00:04:31.928 LIB libspdk_json.a 00:04:31.928 SO libspdk_rdma_utils.so.1.0 00:04:31.928 SYMLINK libspdk_conf.so 00:04:31.928 CC lib/env_dpdk/pci.o 00:04:31.928 SO libspdk_json.so.6.0 00:04:31.928 SYMLINK libspdk_rdma_utils.so 00:04:31.928 CC lib/vmd/led.o 00:04:31.928 LIB libspdk_rdma_provider.a 00:04:31.928 SYMLINK libspdk_json.so 00:04:31.928 CC lib/env_dpdk/init.o 00:04:31.928 CC lib/env_dpdk/threads.o 00:04:31.928 SO libspdk_rdma_provider.so.6.0 00:04:31.928 SYMLINK libspdk_rdma_provider.so 00:04:31.928 CC lib/env_dpdk/pci_ioat.o 00:04:31.928 CC lib/env_dpdk/pci_virtio.o 00:04:31.928 CC lib/env_dpdk/pci_vmd.o 00:04:31.928 CC lib/env_dpdk/pci_idxd.o 00:04:31.928 CC lib/jsonrpc/jsonrpc_server.o 00:04:31.928 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.928 CC lib/env_dpdk/pci_event.o 00:04:31.928 CC lib/env_dpdk/sigbus_handler.o 00:04:31.928 CC lib/env_dpdk/pci_dpdk.o 00:04:31.928 LIB libspdk_idxd.a 00:04:31.928 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:31.928 SO libspdk_idxd.so.12.1 00:04:31.928 LIB libspdk_vmd.a 00:04:31.928 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:31.928 SO libspdk_vmd.so.6.0 00:04:31.928 SYMLINK libspdk_idxd.so 00:04:31.928 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.928 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.928 SYMLINK libspdk_vmd.so 00:04:31.928 LIB libspdk_jsonrpc.a 00:04:31.928 SO libspdk_jsonrpc.so.6.0 00:04:31.928 SYMLINK 
libspdk_jsonrpc.so 00:04:31.928 LIB libspdk_env_dpdk.a 00:04:31.928 CC lib/rpc/rpc.o 00:04:31.928 SO libspdk_env_dpdk.so.15.0 00:04:31.928 LIB libspdk_rpc.a 00:04:31.928 SYMLINK libspdk_env_dpdk.so 00:04:31.928 SO libspdk_rpc.so.6.0 00:04:31.928 SYMLINK libspdk_rpc.so 00:04:31.928 CC lib/keyring/keyring_rpc.o 00:04:31.928 CC lib/keyring/keyring.o 00:04:31.928 CC lib/notify/notify.o 00:04:31.928 CC lib/trace/trace_rpc.o 00:04:31.928 CC lib/trace/trace.o 00:04:31.928 CC lib/trace/trace_flags.o 00:04:31.928 CC lib/notify/notify_rpc.o 00:04:31.928 LIB libspdk_notify.a 00:04:31.928 SO libspdk_notify.so.6.0 00:04:31.928 LIB libspdk_trace.a 00:04:31.928 LIB libspdk_keyring.a 00:04:31.928 SYMLINK libspdk_notify.so 00:04:31.928 SO libspdk_trace.so.11.0 00:04:31.928 SO libspdk_keyring.so.2.0 00:04:31.928 SYMLINK libspdk_trace.so 00:04:31.928 SYMLINK libspdk_keyring.so 00:04:31.928 CC lib/thread/thread.o 00:04:31.928 CC lib/sock/sock.o 00:04:31.928 CC lib/thread/iobuf.o 00:04:31.928 CC lib/sock/sock_rpc.o 00:04:31.928 LIB libspdk_sock.a 00:04:31.928 SO libspdk_sock.so.10.0 00:04:31.928 SYMLINK libspdk_sock.so 00:04:31.928 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:31.928 CC lib/nvme/nvme_ctrlr.o 00:04:31.928 CC lib/nvme/nvme_fabric.o 00:04:31.928 CC lib/nvme/nvme_ns_cmd.o 00:04:31.928 CC lib/nvme/nvme_pcie_common.o 00:04:31.928 CC lib/nvme/nvme_ns.o 00:04:31.928 CC lib/nvme/nvme_qpair.o 00:04:31.928 CC lib/nvme/nvme_pcie.o 00:04:31.928 CC lib/nvme/nvme.o 00:04:31.928 LIB libspdk_thread.a 00:04:31.928 SO libspdk_thread.so.10.1 00:04:31.928 CC lib/nvme/nvme_quirks.o 00:04:31.928 CC lib/nvme/nvme_transport.o 00:04:31.928 CC lib/nvme/nvme_discovery.o 00:04:31.928 SYMLINK libspdk_thread.so 00:04:31.928 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:31.928 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:31.928 CC lib/nvme/nvme_tcp.o 00:04:31.928 CC lib/nvme/nvme_opal.o 00:04:31.928 CC lib/nvme/nvme_io_msg.o 00:04:31.928 CC lib/nvme/nvme_poll_group.o 00:04:31.928 CC lib/nvme/nvme_zns.o 00:04:31.928 CC 
lib/nvme/nvme_stubs.o 00:04:31.928 CC lib/nvme/nvme_auth.o 00:04:31.928 CC lib/accel/accel.o 00:04:31.928 CC lib/nvme/nvme_cuse.o 00:04:31.928 CC lib/blob/blobstore.o 00:04:31.928 CC lib/init/json_config.o 00:04:31.928 CC lib/init/subsystem.o 00:04:31.928 CC lib/blob/request.o 00:04:31.928 CC lib/blob/zeroes.o 00:04:31.928 CC lib/init/subsystem_rpc.o 00:04:31.928 CC lib/blob/blob_bs_dev.o 00:04:31.928 CC lib/init/rpc.o 00:04:31.928 CC lib/accel/accel_rpc.o 00:04:31.928 CC lib/virtio/virtio.o 00:04:31.928 CC lib/accel/accel_sw.o 00:04:32.188 CC lib/virtio/virtio_vhost_user.o 00:04:32.188 LIB libspdk_init.a 00:04:32.188 SO libspdk_init.so.6.0 00:04:32.188 CC lib/virtio/virtio_vfio_user.o 00:04:32.188 SYMLINK libspdk_init.so 00:04:32.188 CC lib/nvme/nvme_rdma.o 00:04:32.448 CC lib/virtio/virtio_pci.o 00:04:32.448 CC lib/fsdev/fsdev.o 00:04:32.448 CC lib/fsdev/fsdev_io.o 00:04:32.448 CC lib/event/app.o 00:04:32.448 CC lib/fsdev/fsdev_rpc.o 00:04:32.448 CC lib/event/reactor.o 00:04:32.448 CC lib/event/log_rpc.o 00:04:32.448 LIB libspdk_accel.a 00:04:32.448 CC lib/event/app_rpc.o 00:04:32.448 SO libspdk_accel.so.16.0 00:04:32.448 CC lib/event/scheduler_static.o 00:04:32.707 SYMLINK libspdk_accel.so 00:04:32.707 LIB libspdk_virtio.a 00:04:32.707 SO libspdk_virtio.so.7.0 00:04:32.707 SYMLINK libspdk_virtio.so 00:04:32.707 CC lib/bdev/bdev_rpc.o 00:04:32.707 CC lib/bdev/bdev.o 00:04:32.707 CC lib/bdev/part.o 00:04:32.707 CC lib/bdev/scsi_nvme.o 00:04:32.707 CC lib/bdev/bdev_zone.o 00:04:32.967 LIB libspdk_event.a 00:04:32.967 SO libspdk_event.so.14.0 00:04:32.967 SYMLINK libspdk_event.so 00:04:32.967 LIB libspdk_fsdev.a 00:04:32.967 SO libspdk_fsdev.so.1.0 00:04:33.229 SYMLINK libspdk_fsdev.so 00:04:33.488 LIB libspdk_nvme.a 00:04:33.488 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:33.488 SO libspdk_nvme.so.14.0 00:04:33.747 SYMLINK libspdk_nvme.so 00:04:34.316 LIB libspdk_fuse_dispatcher.a 00:04:34.316 SO libspdk_fuse_dispatcher.so.1.0 00:04:34.316 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:34.886 LIB libspdk_blob.a 00:04:34.886 SO libspdk_blob.so.11.0 00:04:34.886 SYMLINK libspdk_blob.so 00:04:35.456 CC lib/blobfs/blobfs.o 00:04:35.456 CC lib/blobfs/tree.o 00:04:35.456 CC lib/lvol/lvol.o 00:04:35.456 LIB libspdk_bdev.a 00:04:35.456 SO libspdk_bdev.so.16.0 00:04:35.456 SYMLINK libspdk_bdev.so 00:04:35.715 CC lib/ublk/ublk.o 00:04:35.715 CC lib/ublk/ublk_rpc.o 00:04:35.715 CC lib/nbd/nbd.o 00:04:35.715 CC lib/nbd/nbd_rpc.o 00:04:35.715 CC lib/ftl/ftl_init.o 00:04:35.715 CC lib/ftl/ftl_core.o 00:04:35.715 CC lib/scsi/dev.o 00:04:35.715 CC lib/nvmf/ctrlr.o 00:04:35.975 CC lib/nvmf/ctrlr_discovery.o 00:04:35.975 CC lib/nvmf/ctrlr_bdev.o 00:04:35.975 CC lib/ftl/ftl_layout.o 00:04:35.975 CC lib/scsi/lun.o 00:04:36.235 CC lib/ftl/ftl_debug.o 00:04:36.235 LIB libspdk_blobfs.a 00:04:36.235 LIB libspdk_nbd.a 00:04:36.235 SO libspdk_blobfs.so.10.0 00:04:36.235 SO libspdk_nbd.so.7.0 00:04:36.235 SYMLINK libspdk_nbd.so 00:04:36.235 SYMLINK libspdk_blobfs.so 00:04:36.235 CC lib/ftl/ftl_io.o 00:04:36.235 CC lib/ftl/ftl_sb.o 00:04:36.235 LIB libspdk_lvol.a 00:04:36.235 CC lib/ftl/ftl_l2p.o 00:04:36.235 SO libspdk_lvol.so.10.0 00:04:36.235 CC lib/scsi/port.o 00:04:36.235 CC lib/scsi/scsi.o 00:04:36.235 SYMLINK libspdk_lvol.so 00:04:36.235 CC lib/scsi/scsi_bdev.o 00:04:36.495 CC lib/nvmf/subsystem.o 00:04:36.495 LIB libspdk_ublk.a 00:04:36.495 CC lib/ftl/ftl_l2p_flat.o 00:04:36.495 SO libspdk_ublk.so.3.0 00:04:36.495 CC lib/nvmf/nvmf.o 00:04:36.495 CC lib/nvmf/nvmf_rpc.o 00:04:36.495 CC lib/nvmf/transport.o 00:04:36.495 CC lib/ftl/ftl_nv_cache.o 00:04:36.495 SYMLINK libspdk_ublk.so 00:04:36.495 CC lib/scsi/scsi_pr.o 00:04:36.495 CC lib/scsi/scsi_rpc.o 00:04:36.755 CC lib/scsi/task.o 00:04:36.755 CC lib/nvmf/tcp.o 00:04:36.755 CC lib/nvmf/stubs.o 00:04:36.755 CC lib/nvmf/mdns_server.o 00:04:37.014 LIB libspdk_scsi.a 00:04:37.014 SO libspdk_scsi.so.9.0 00:04:37.014 SYMLINK libspdk_scsi.so 00:04:37.014 CC lib/ftl/ftl_band.o 00:04:37.274 CC 
lib/nvmf/rdma.o 00:04:37.274 CC lib/nvmf/auth.o 00:04:37.274 CC lib/ftl/ftl_band_ops.o 00:04:37.274 CC lib/iscsi/conn.o 00:04:37.533 CC lib/iscsi/init_grp.o 00:04:37.533 CC lib/iscsi/iscsi.o 00:04:37.533 CC lib/vhost/vhost.o 00:04:37.533 CC lib/ftl/ftl_writer.o 00:04:37.533 CC lib/vhost/vhost_rpc.o 00:04:37.793 CC lib/iscsi/param.o 00:04:37.793 CC lib/vhost/vhost_scsi.o 00:04:37.793 CC lib/ftl/ftl_rq.o 00:04:38.053 CC lib/ftl/ftl_reloc.o 00:04:38.053 CC lib/iscsi/portal_grp.o 00:04:38.053 CC lib/iscsi/tgt_node.o 00:04:38.053 CC lib/vhost/vhost_blk.o 00:04:38.053 CC lib/ftl/ftl_l2p_cache.o 00:04:38.311 CC lib/vhost/rte_vhost_user.o 00:04:38.311 CC lib/iscsi/iscsi_subsystem.o 00:04:38.311 CC lib/iscsi/iscsi_rpc.o 00:04:38.311 CC lib/ftl/ftl_p2l.o 00:04:38.570 CC lib/ftl/ftl_p2l_log.o 00:04:38.570 CC lib/ftl/mngt/ftl_mngt.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:38.829 CC lib/iscsi/task.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:38.829 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:39.088 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:39.088 LIB libspdk_iscsi.a 00:04:39.088 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:39.088 CC lib/ftl/utils/ftl_conf.o 00:04:39.088 CC lib/ftl/utils/ftl_md.o 00:04:39.088 CC lib/ftl/utils/ftl_mempool.o 00:04:39.088 CC lib/ftl/utils/ftl_bitmap.o 00:04:39.088 LIB libspdk_vhost.a 00:04:39.088 SO libspdk_iscsi.so.8.0 00:04:39.349 SO libspdk_vhost.so.8.0 00:04:39.349 CC lib/ftl/utils/ftl_property.o 00:04:39.349 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:39.349 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:39.349 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:39.349 SYMLINK libspdk_vhost.so 00:04:39.349 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:39.349 SYMLINK libspdk_iscsi.so 00:04:39.349 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:39.349 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:39.349 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:39.349 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:39.608 LIB libspdk_nvmf.a 00:04:39.608 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:39.608 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:39.608 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:39.608 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:39.608 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:39.608 CC lib/ftl/base/ftl_base_dev.o 00:04:39.608 CC lib/ftl/base/ftl_base_bdev.o 00:04:39.608 SO libspdk_nvmf.so.19.0 00:04:39.608 CC lib/ftl/ftl_trace.o 00:04:39.867 SYMLINK libspdk_nvmf.so 00:04:39.867 LIB libspdk_ftl.a 00:04:40.126 SO libspdk_ftl.so.9.0 00:04:40.386 SYMLINK libspdk_ftl.so 00:04:40.645 CC module/env_dpdk/env_dpdk_rpc.o 00:04:40.904 CC module/blob/bdev/blob_bdev.o 00:04:40.904 CC module/accel/error/accel_error.o 00:04:40.904 CC module/fsdev/aio/fsdev_aio.o 00:04:40.904 CC module/sock/posix/posix.o 00:04:40.904 CC module/keyring/linux/keyring.o 00:04:40.904 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:40.904 CC module/scheduler/gscheduler/gscheduler.o 00:04:40.904 CC module/keyring/file/keyring.o 00:04:40.904 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.904 LIB libspdk_env_dpdk_rpc.a 00:04:40.904 SO libspdk_env_dpdk_rpc.so.6.0 00:04:40.904 SYMLINK libspdk_env_dpdk_rpc.so 00:04:40.904 CC module/keyring/linux/keyring_rpc.o 00:04:40.904 CC module/keyring/file/keyring_rpc.o 00:04:40.904 LIB libspdk_scheduler_dpdk_governor.a 00:04:40.904 LIB libspdk_scheduler_gscheduler.a 00:04:40.904 CC module/accel/error/accel_error_rpc.o 00:04:40.904 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:40.904 SO libspdk_scheduler_gscheduler.so.4.0 00:04:40.904 LIB libspdk_scheduler_dynamic.a 00:04:41.164 SO libspdk_scheduler_dynamic.so.4.0 00:04:41.164 SYMLINK libspdk_scheduler_gscheduler.so 00:04:41.164 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:04:41.164 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:41.164 CC module/fsdev/aio/linux_aio_mgr.o 00:04:41.164 LIB libspdk_keyring_linux.a 00:04:41.164 LIB libspdk_blob_bdev.a 00:04:41.164 CC module/accel/ioat/accel_ioat.o 00:04:41.164 LIB libspdk_keyring_file.a 00:04:41.164 SYMLINK libspdk_scheduler_dynamic.so 00:04:41.164 SO libspdk_keyring_linux.so.1.0 00:04:41.164 SO libspdk_blob_bdev.so.11.0 00:04:41.164 LIB libspdk_accel_error.a 00:04:41.164 SO libspdk_keyring_file.so.2.0 00:04:41.164 SO libspdk_accel_error.so.2.0 00:04:41.164 SYMLINK libspdk_keyring_linux.so 00:04:41.164 SYMLINK libspdk_blob_bdev.so 00:04:41.164 SYMLINK libspdk_keyring_file.so 00:04:41.164 CC module/accel/ioat/accel_ioat_rpc.o 00:04:41.164 SYMLINK libspdk_accel_error.so 00:04:41.164 CC module/accel/dsa/accel_dsa.o 00:04:41.164 CC module/accel/dsa/accel_dsa_rpc.o 00:04:41.424 LIB libspdk_accel_ioat.a 00:04:41.424 SO libspdk_accel_ioat.so.6.0 00:04:41.424 CC module/accel/iaa/accel_iaa.o 00:04:41.424 CC module/accel/iaa/accel_iaa_rpc.o 00:04:41.424 CC module/bdev/delay/vbdev_delay.o 00:04:41.424 SYMLINK libspdk_accel_ioat.so 00:04:41.424 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:41.424 CC module/bdev/error/vbdev_error.o 00:04:41.424 CC module/blobfs/bdev/blobfs_bdev.o 00:04:41.424 CC module/bdev/gpt/gpt.o 00:04:41.424 LIB libspdk_fsdev_aio.a 00:04:41.424 CC module/bdev/error/vbdev_error_rpc.o 00:04:41.424 SO libspdk_fsdev_aio.so.1.0 00:04:41.424 LIB libspdk_accel_dsa.a 00:04:41.684 SO libspdk_accel_dsa.so.5.0 00:04:41.684 LIB libspdk_accel_iaa.a 00:04:41.684 CC module/bdev/gpt/vbdev_gpt.o 00:04:41.684 SYMLINK libspdk_fsdev_aio.so 00:04:41.684 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:41.684 SYMLINK libspdk_accel_dsa.so 00:04:41.684 SO libspdk_accel_iaa.so.3.0 00:04:41.684 LIB libspdk_sock_posix.a 00:04:41.684 SO libspdk_sock_posix.so.6.0 00:04:41.684 SYMLINK libspdk_accel_iaa.so 00:04:41.684 LIB libspdk_bdev_error.a 00:04:41.684 SO 
libspdk_bdev_error.so.6.0 00:04:41.684 SYMLINK libspdk_sock_posix.so 00:04:41.684 CC module/bdev/lvol/vbdev_lvol.o 00:04:41.684 CC module/bdev/malloc/bdev_malloc.o 00:04:41.684 LIB libspdk_blobfs_bdev.a 00:04:41.684 LIB libspdk_bdev_delay.a 00:04:41.684 SYMLINK libspdk_bdev_error.so 00:04:41.684 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:41.684 CC module/bdev/null/bdev_null.o 00:04:41.684 SO libspdk_blobfs_bdev.so.6.0 00:04:41.684 CC module/bdev/passthru/vbdev_passthru.o 00:04:41.684 SO libspdk_bdev_delay.so.6.0 00:04:41.946 CC module/bdev/nvme/bdev_nvme.o 00:04:41.946 SYMLINK libspdk_blobfs_bdev.so 00:04:41.946 CC module/bdev/raid/bdev_raid.o 00:04:41.946 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:41.946 SYMLINK libspdk_bdev_delay.so 00:04:41.946 CC module/bdev/raid/bdev_raid_rpc.o 00:04:41.946 LIB libspdk_bdev_gpt.a 00:04:41.946 SO libspdk_bdev_gpt.so.6.0 00:04:41.946 SYMLINK libspdk_bdev_gpt.so 00:04:41.946 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:41.946 CC module/bdev/null/bdev_null_rpc.o 00:04:42.205 CC module/bdev/split/vbdev_split.o 00:04:42.205 LIB libspdk_bdev_passthru.a 00:04:42.205 CC module/bdev/raid/bdev_raid_sb.o 00:04:42.205 SO libspdk_bdev_passthru.so.6.0 00:04:42.206 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:42.206 SYMLINK libspdk_bdev_passthru.so 00:04:42.206 CC module/bdev/split/vbdev_split_rpc.o 00:04:42.206 LIB libspdk_bdev_null.a 00:04:42.206 SO libspdk_bdev_null.so.6.0 00:04:42.206 LIB libspdk_bdev_lvol.a 00:04:42.206 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:42.206 SO libspdk_bdev_lvol.so.6.0 00:04:42.206 SYMLINK libspdk_bdev_null.so 00:04:42.206 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:42.206 CC module/bdev/nvme/nvme_rpc.o 00:04:42.206 LIB libspdk_bdev_malloc.a 00:04:42.206 LIB libspdk_bdev_split.a 00:04:42.466 SYMLINK libspdk_bdev_lvol.so 00:04:42.466 CC module/bdev/raid/raid0.o 00:04:42.466 SO libspdk_bdev_malloc.so.6.0 00:04:42.466 SO libspdk_bdev_split.so.6.0 00:04:42.466 SYMLINK libspdk_bdev_split.so 
00:04:42.466 SYMLINK libspdk_bdev_malloc.so 00:04:42.466 CC module/bdev/raid/raid1.o 00:04:42.466 CC module/bdev/raid/concat.o 00:04:42.466 CC module/bdev/raid/raid5f.o 00:04:42.466 CC module/bdev/nvme/bdev_mdns_client.o 00:04:42.466 CC module/bdev/aio/bdev_aio.o 00:04:42.725 LIB libspdk_bdev_zone_block.a 00:04:42.725 CC module/bdev/nvme/vbdev_opal.o 00:04:42.725 SO libspdk_bdev_zone_block.so.6.0 00:04:42.725 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:42.725 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:42.725 SYMLINK libspdk_bdev_zone_block.so 00:04:42.725 CC module/bdev/aio/bdev_aio_rpc.o 00:04:42.725 CC module/bdev/ftl/bdev_ftl.o 00:04:42.725 CC module/bdev/iscsi/bdev_iscsi.o 00:04:42.725 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:42.725 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:42.983 LIB libspdk_bdev_aio.a 00:04:42.983 SO libspdk_bdev_aio.so.6.0 00:04:42.983 SYMLINK libspdk_bdev_aio.so 00:04:42.983 LIB libspdk_bdev_raid.a 00:04:42.983 SO libspdk_bdev_raid.so.6.0 00:04:42.983 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:42.983 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:42.983 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:42.983 LIB libspdk_bdev_ftl.a 00:04:43.241 SO libspdk_bdev_ftl.so.6.0 00:04:43.241 SYMLINK libspdk_bdev_raid.so 00:04:43.241 SYMLINK libspdk_bdev_ftl.so 00:04:43.241 LIB libspdk_bdev_iscsi.a 00:04:43.241 SO libspdk_bdev_iscsi.so.6.0 00:04:43.241 SYMLINK libspdk_bdev_iscsi.so 00:04:43.500 LIB libspdk_bdev_virtio.a 00:04:43.768 SO libspdk_bdev_virtio.so.6.0 00:04:43.768 SYMLINK libspdk_bdev_virtio.so 00:04:44.337 LIB libspdk_bdev_nvme.a 00:04:44.337 SO libspdk_bdev_nvme.so.7.0 00:04:44.337 SYMLINK libspdk_bdev_nvme.so 00:04:44.905 CC module/event/subsystems/iobuf/iobuf.o 00:04:44.905 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:44.905 CC module/event/subsystems/fsdev/fsdev.o 00:04:44.905 CC module/event/subsystems/sock/sock.o 00:04:44.905 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:44.905 CC module/event/subsystems/vmd/vmd.o 
00:04:44.905 CC module/event/subsystems/scheduler/scheduler.o 00:04:44.905 CC module/event/subsystems/keyring/keyring.o 00:04:44.905 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:45.166 LIB libspdk_event_sock.a 00:04:45.166 LIB libspdk_event_keyring.a 00:04:45.166 LIB libspdk_event_iobuf.a 00:04:45.166 LIB libspdk_event_vmd.a 00:04:45.166 LIB libspdk_event_scheduler.a 00:04:45.166 LIB libspdk_event_fsdev.a 00:04:45.166 SO libspdk_event_keyring.so.1.0 00:04:45.166 SO libspdk_event_sock.so.5.0 00:04:45.166 SO libspdk_event_iobuf.so.3.0 00:04:45.166 SO libspdk_event_scheduler.so.4.0 00:04:45.166 LIB libspdk_event_vhost_blk.a 00:04:45.166 SO libspdk_event_vmd.so.6.0 00:04:45.166 SO libspdk_event_fsdev.so.1.0 00:04:45.166 SO libspdk_event_vhost_blk.so.3.0 00:04:45.166 SYMLINK libspdk_event_sock.so 00:04:45.166 SYMLINK libspdk_event_iobuf.so 00:04:45.166 SYMLINK libspdk_event_keyring.so 00:04:45.166 SYMLINK libspdk_event_scheduler.so 00:04:45.166 SYMLINK libspdk_event_vmd.so 00:04:45.166 SYMLINK libspdk_event_fsdev.so 00:04:45.166 SYMLINK libspdk_event_vhost_blk.so 00:04:45.425 CC module/event/subsystems/accel/accel.o 00:04:45.692 LIB libspdk_event_accel.a 00:04:45.692 SO libspdk_event_accel.so.6.0 00:04:45.692 SYMLINK libspdk_event_accel.so 00:04:46.276 CC module/event/subsystems/bdev/bdev.o 00:04:46.276 LIB libspdk_event_bdev.a 00:04:46.276 SO libspdk_event_bdev.so.6.0 00:04:46.276 SYMLINK libspdk_event_bdev.so 00:04:46.851 CC module/event/subsystems/ublk/ublk.o 00:04:46.851 CC module/event/subsystems/scsi/scsi.o 00:04:46.851 CC module/event/subsystems/nbd/nbd.o 00:04:46.851 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:46.851 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:46.851 LIB libspdk_event_ublk.a 00:04:46.851 LIB libspdk_event_scsi.a 00:04:46.851 SO libspdk_event_ublk.so.3.0 00:04:46.851 LIB libspdk_event_nbd.a 00:04:46.851 SO libspdk_event_scsi.so.6.0 00:04:46.851 SO libspdk_event_nbd.so.6.0 00:04:46.851 SYMLINK libspdk_event_ublk.so 00:04:46.851 
SYMLINK libspdk_event_nbd.so 00:04:46.851 LIB libspdk_event_nvmf.a 00:04:46.851 SYMLINK libspdk_event_scsi.so 00:04:46.851 SO libspdk_event_nvmf.so.6.0 00:04:47.111 SYMLINK libspdk_event_nvmf.so 00:04:47.371 CC module/event/subsystems/iscsi/iscsi.o 00:04:47.371 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:47.371 LIB libspdk_event_iscsi.a 00:04:47.371 LIB libspdk_event_vhost_scsi.a 00:04:47.371 SO libspdk_event_iscsi.so.6.0 00:04:47.630 SO libspdk_event_vhost_scsi.so.3.0 00:04:47.630 SYMLINK libspdk_event_iscsi.so 00:04:47.630 SYMLINK libspdk_event_vhost_scsi.so 00:04:47.888 SO libspdk.so.6.0 00:04:47.888 SYMLINK libspdk.so 00:04:48.146 TEST_HEADER include/spdk/accel.h 00:04:48.146 TEST_HEADER include/spdk/accel_module.h 00:04:48.146 TEST_HEADER include/spdk/assert.h 00:04:48.146 TEST_HEADER include/spdk/barrier.h 00:04:48.146 CXX app/trace/trace.o 00:04:48.146 TEST_HEADER include/spdk/base64.h 00:04:48.146 CC test/rpc_client/rpc_client_test.o 00:04:48.146 TEST_HEADER include/spdk/bdev.h 00:04:48.146 TEST_HEADER include/spdk/bdev_module.h 00:04:48.146 TEST_HEADER include/spdk/bdev_zone.h 00:04:48.146 TEST_HEADER include/spdk/bit_array.h 00:04:48.146 TEST_HEADER include/spdk/bit_pool.h 00:04:48.146 TEST_HEADER include/spdk/blob_bdev.h 00:04:48.146 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:48.146 TEST_HEADER include/spdk/blobfs.h 00:04:48.146 TEST_HEADER include/spdk/blob.h 00:04:48.146 TEST_HEADER include/spdk/conf.h 00:04:48.146 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:48.146 TEST_HEADER include/spdk/config.h 00:04:48.146 TEST_HEADER include/spdk/cpuset.h 00:04:48.146 TEST_HEADER include/spdk/crc16.h 00:04:48.146 TEST_HEADER include/spdk/crc32.h 00:04:48.146 TEST_HEADER include/spdk/crc64.h 00:04:48.146 TEST_HEADER include/spdk/dif.h 00:04:48.146 TEST_HEADER include/spdk/dma.h 00:04:48.146 TEST_HEADER include/spdk/endian.h 00:04:48.146 TEST_HEADER include/spdk/env_dpdk.h 00:04:48.146 TEST_HEADER include/spdk/env.h 00:04:48.146 TEST_HEADER 
include/spdk/event.h 00:04:48.146 TEST_HEADER include/spdk/fd_group.h 00:04:48.146 TEST_HEADER include/spdk/fd.h 00:04:48.146 TEST_HEADER include/spdk/file.h 00:04:48.146 TEST_HEADER include/spdk/fsdev.h 00:04:48.146 TEST_HEADER include/spdk/fsdev_module.h 00:04:48.146 TEST_HEADER include/spdk/ftl.h 00:04:48.146 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:48.146 TEST_HEADER include/spdk/gpt_spec.h 00:04:48.146 TEST_HEADER include/spdk/hexlify.h 00:04:48.146 TEST_HEADER include/spdk/histogram_data.h 00:04:48.146 TEST_HEADER include/spdk/idxd.h 00:04:48.146 TEST_HEADER include/spdk/idxd_spec.h 00:04:48.146 TEST_HEADER include/spdk/init.h 00:04:48.146 CC examples/util/zipf/zipf.o 00:04:48.146 TEST_HEADER include/spdk/ioat.h 00:04:48.146 CC examples/ioat/perf/perf.o 00:04:48.146 TEST_HEADER include/spdk/ioat_spec.h 00:04:48.146 TEST_HEADER include/spdk/iscsi_spec.h 00:04:48.146 CC test/thread/poller_perf/poller_perf.o 00:04:48.146 TEST_HEADER include/spdk/json.h 00:04:48.146 TEST_HEADER include/spdk/jsonrpc.h 00:04:48.146 TEST_HEADER include/spdk/keyring.h 00:04:48.146 TEST_HEADER include/spdk/keyring_module.h 00:04:48.146 TEST_HEADER include/spdk/likely.h 00:04:48.146 TEST_HEADER include/spdk/log.h 00:04:48.146 TEST_HEADER include/spdk/lvol.h 00:04:48.146 TEST_HEADER include/spdk/md5.h 00:04:48.146 TEST_HEADER include/spdk/memory.h 00:04:48.146 CC test/dma/test_dma/test_dma.o 00:04:48.146 TEST_HEADER include/spdk/mmio.h 00:04:48.146 TEST_HEADER include/spdk/nbd.h 00:04:48.146 TEST_HEADER include/spdk/net.h 00:04:48.146 CC test/app/bdev_svc/bdev_svc.o 00:04:48.146 TEST_HEADER include/spdk/notify.h 00:04:48.146 TEST_HEADER include/spdk/nvme.h 00:04:48.146 TEST_HEADER include/spdk/nvme_intel.h 00:04:48.146 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:48.146 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:48.146 TEST_HEADER include/spdk/nvme_spec.h 00:04:48.146 TEST_HEADER include/spdk/nvme_zns.h 00:04:48.146 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:48.146 
TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:48.146 TEST_HEADER include/spdk/nvmf.h 00:04:48.146 TEST_HEADER include/spdk/nvmf_spec.h 00:04:48.146 TEST_HEADER include/spdk/nvmf_transport.h 00:04:48.146 TEST_HEADER include/spdk/opal.h 00:04:48.146 TEST_HEADER include/spdk/opal_spec.h 00:04:48.146 TEST_HEADER include/spdk/pci_ids.h 00:04:48.146 TEST_HEADER include/spdk/pipe.h 00:04:48.146 TEST_HEADER include/spdk/queue.h 00:04:48.146 TEST_HEADER include/spdk/reduce.h 00:04:48.146 TEST_HEADER include/spdk/rpc.h 00:04:48.146 TEST_HEADER include/spdk/scheduler.h 00:04:48.146 TEST_HEADER include/spdk/scsi.h 00:04:48.146 TEST_HEADER include/spdk/scsi_spec.h 00:04:48.146 TEST_HEADER include/spdk/sock.h 00:04:48.146 CC test/env/mem_callbacks/mem_callbacks.o 00:04:48.146 TEST_HEADER include/spdk/stdinc.h 00:04:48.146 TEST_HEADER include/spdk/string.h 00:04:48.146 TEST_HEADER include/spdk/thread.h 00:04:48.146 TEST_HEADER include/spdk/trace.h 00:04:48.146 TEST_HEADER include/spdk/trace_parser.h 00:04:48.146 TEST_HEADER include/spdk/tree.h 00:04:48.146 LINK rpc_client_test 00:04:48.146 TEST_HEADER include/spdk/ublk.h 00:04:48.146 TEST_HEADER include/spdk/util.h 00:04:48.146 LINK interrupt_tgt 00:04:48.146 TEST_HEADER include/spdk/uuid.h 00:04:48.146 LINK zipf 00:04:48.146 TEST_HEADER include/spdk/version.h 00:04:48.146 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:48.146 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:48.146 TEST_HEADER include/spdk/vhost.h 00:04:48.146 TEST_HEADER include/spdk/vmd.h 00:04:48.146 LINK poller_perf 00:04:48.146 TEST_HEADER include/spdk/xor.h 00:04:48.146 TEST_HEADER include/spdk/zipf.h 00:04:48.146 CXX test/cpp_headers/accel.o 00:04:48.405 LINK ioat_perf 00:04:48.405 LINK bdev_svc 00:04:48.405 LINK spdk_trace 00:04:48.405 CXX test/cpp_headers/accel_module.o 00:04:48.405 CC app/trace_record/trace_record.o 00:04:48.665 CC test/app/histogram_perf/histogram_perf.o 00:04:48.665 CC examples/ioat/verify/verify.o 00:04:48.665 CXX 
test/cpp_headers/assert.o 00:04:48.665 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:48.665 CC test/app/jsoncat/jsoncat.o 00:04:48.665 CC test/event/event_perf/event_perf.o 00:04:48.665 LINK histogram_perf 00:04:48.665 CC test/event/reactor/reactor.o 00:04:48.665 LINK mem_callbacks 00:04:48.665 LINK test_dma 00:04:48.665 CXX test/cpp_headers/barrier.o 00:04:48.665 LINK spdk_trace_record 00:04:48.665 LINK verify 00:04:48.665 LINK jsoncat 00:04:48.925 LINK event_perf 00:04:48.925 LINK reactor 00:04:48.925 CXX test/cpp_headers/base64.o 00:04:48.925 CC test/app/stub/stub.o 00:04:48.925 CC test/env/vtophys/vtophys.o 00:04:48.925 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:48.925 CC app/nvmf_tgt/nvmf_main.o 00:04:49.185 CXX test/cpp_headers/bdev.o 00:04:49.185 LINK vtophys 00:04:49.185 LINK nvme_fuzz 00:04:49.185 CC test/event/reactor_perf/reactor_perf.o 00:04:49.185 CC examples/sock/hello_world/hello_sock.o 00:04:49.185 LINK stub 00:04:49.185 CC examples/thread/thread/thread_ex.o 00:04:49.185 LINK env_dpdk_post_init 00:04:49.185 CC test/accel/dif/dif.o 00:04:49.185 LINK nvmf_tgt 00:04:49.185 LINK reactor_perf 00:04:49.185 CXX test/cpp_headers/bdev_module.o 00:04:49.185 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:49.185 CC test/event/app_repeat/app_repeat.o 00:04:49.185 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:49.443 LINK hello_sock 00:04:49.443 LINK thread 00:04:49.443 CC test/env/memory/memory_ut.o 00:04:49.443 CC test/env/pci/pci_ut.o 00:04:49.443 CXX test/cpp_headers/bdev_zone.o 00:04:49.443 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:49.443 LINK app_repeat 00:04:49.443 CC app/iscsi_tgt/iscsi_tgt.o 00:04:49.703 CXX test/cpp_headers/bit_array.o 00:04:49.703 CC examples/vmd/lsvmd/lsvmd.o 00:04:49.703 CC test/blobfs/mkfs/mkfs.o 00:04:49.703 CXX test/cpp_headers/bit_pool.o 00:04:49.703 CC test/event/scheduler/scheduler.o 00:04:49.703 LINK iscsi_tgt 00:04:49.703 LINK lsvmd 00:04:49.703 LINK pci_ut 00:04:49.962 LINK vhost_fuzz 00:04:49.962 CXX 
test/cpp_headers/blob_bdev.o 00:04:49.962 LINK dif 00:04:49.962 LINK mkfs 00:04:49.962 LINK scheduler 00:04:49.962 CXX test/cpp_headers/blobfs_bdev.o 00:04:49.962 CXX test/cpp_headers/blobfs.o 00:04:49.962 CC examples/vmd/led/led.o 00:04:50.221 CC app/spdk_tgt/spdk_tgt.o 00:04:50.221 CXX test/cpp_headers/blob.o 00:04:50.221 LINK led 00:04:50.221 CC test/nvme/aer/aer.o 00:04:50.221 CC examples/idxd/perf/perf.o 00:04:50.221 CC test/lvol/esnap/esnap.o 00:04:50.221 CXX test/cpp_headers/conf.o 00:04:50.221 CC test/nvme/reset/reset.o 00:04:50.221 LINK spdk_tgt 00:04:50.483 CXX test/cpp_headers/config.o 00:04:50.483 CC test/bdev/bdevio/bdevio.o 00:04:50.483 CC test/nvme/sgl/sgl.o 00:04:50.483 CXX test/cpp_headers/cpuset.o 00:04:50.483 LINK aer 00:04:50.483 LINK memory_ut 00:04:50.483 LINK idxd_perf 00:04:50.483 LINK reset 00:04:50.483 CC app/spdk_lspci/spdk_lspci.o 00:04:50.743 CXX test/cpp_headers/crc16.o 00:04:50.743 LINK spdk_lspci 00:04:50.743 LINK sgl 00:04:50.743 CXX test/cpp_headers/crc32.o 00:04:50.743 CC test/nvme/e2edp/nvme_dp.o 00:04:50.743 LINK bdevio 00:04:50.743 CC app/spdk_nvme_perf/perf.o 00:04:50.743 CC examples/accel/perf/accel_perf.o 00:04:51.002 CXX test/cpp_headers/crc64.o 00:04:51.002 CXX test/cpp_headers/dif.o 00:04:51.002 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:51.002 CXX test/cpp_headers/dma.o 00:04:51.002 LINK nvme_dp 00:04:51.002 CXX test/cpp_headers/endian.o 00:04:51.002 CC app/spdk_nvme_discover/discovery_aer.o 00:04:51.002 CC app/spdk_nvme_identify/identify.o 00:04:51.261 LINK iscsi_fuzz 00:04:51.261 LINK hello_fsdev 00:04:51.261 CC examples/blob/hello_world/hello_blob.o 00:04:51.261 CXX test/cpp_headers/env_dpdk.o 00:04:51.261 CC test/nvme/overhead/overhead.o 00:04:51.261 LINK spdk_nvme_discover 00:04:51.261 CXX test/cpp_headers/env.o 00:04:51.261 CXX test/cpp_headers/event.o 00:04:51.261 LINK accel_perf 00:04:51.521 LINK hello_blob 00:04:51.521 CC examples/nvme/hello_world/hello_world.o 00:04:51.521 CXX test/cpp_headers/fd_group.o 
00:04:51.521 LINK overhead 00:04:51.521 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.522 CC examples/nvme/reconnect/reconnect.o 00:04:51.522 CC app/spdk_top/spdk_top.o 00:04:51.522 CXX test/cpp_headers/fd.o 00:04:51.782 LINK spdk_nvme_perf 00:04:51.782 CC examples/blob/cli/blobcli.o 00:04:51.782 LINK hello_world 00:04:51.782 CC test/nvme/err_injection/err_injection.o 00:04:51.782 CXX test/cpp_headers/file.o 00:04:51.782 LINK reconnect 00:04:52.043 CXX test/cpp_headers/fsdev.o 00:04:52.043 LINK err_injection 00:04:52.043 LINK spdk_nvme_identify 00:04:52.043 CC app/vhost/vhost.o 00:04:52.043 CC app/spdk_dd/spdk_dd.o 00:04:52.043 LINK nvme_manage 00:04:52.043 CXX test/cpp_headers/fsdev_module.o 00:04:52.043 CC examples/nvme/arbitration/arbitration.o 00:04:52.303 LINK blobcli 00:04:52.303 CC test/nvme/startup/startup.o 00:04:52.303 LINK vhost 00:04:52.303 CC examples/nvme/hotplug/hotplug.o 00:04:52.303 CXX test/cpp_headers/ftl.o 00:04:52.303 LINK startup 00:04:52.303 LINK spdk_dd 00:04:52.303 CC examples/bdev/hello_world/hello_bdev.o 00:04:52.562 LINK arbitration 00:04:52.562 LINK hotplug 00:04:52.562 CXX test/cpp_headers/fuse_dispatcher.o 00:04:52.562 CC examples/bdev/bdevperf/bdevperf.o 00:04:52.562 LINK spdk_top 00:04:52.562 CC app/fio/nvme/fio_plugin.o 00:04:52.562 CXX test/cpp_headers/gpt_spec.o 00:04:52.562 CC test/nvme/reserve/reserve.o 00:04:52.562 LINK hello_bdev 00:04:52.562 CXX test/cpp_headers/hexlify.o 00:04:52.562 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:52.822 CC examples/nvme/abort/abort.o 00:04:52.822 CXX test/cpp_headers/histogram_data.o 00:04:52.822 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.822 CXX test/cpp_headers/idxd.o 00:04:52.822 LINK reserve 00:04:52.822 CC app/fio/bdev/fio_plugin.o 00:04:52.822 LINK cmb_copy 00:04:52.822 LINK pmr_persistence 00:04:52.822 CXX test/cpp_headers/idxd_spec.o 00:04:53.083 CC test/nvme/simple_copy/simple_copy.o 00:04:53.083 CC test/nvme/connect_stress/connect_stress.o 00:04:53.083 LINK 
abort 00:04:53.083 CXX test/cpp_headers/init.o 00:04:53.083 CXX test/cpp_headers/ioat.o 00:04:53.083 CC test/nvme/boot_partition/boot_partition.o 00:04:53.083 LINK spdk_nvme 00:04:53.084 CXX test/cpp_headers/ioat_spec.o 00:04:53.084 LINK simple_copy 00:04:53.084 LINK connect_stress 00:04:53.084 CXX test/cpp_headers/iscsi_spec.o 00:04:53.343 LINK boot_partition 00:04:53.343 CC test/nvme/compliance/nvme_compliance.o 00:04:53.343 LINK spdk_bdev 00:04:53.343 CC test/nvme/fused_ordering/fused_ordering.o 00:04:53.343 LINK bdevperf 00:04:53.343 CXX test/cpp_headers/json.o 00:04:53.343 CXX test/cpp_headers/jsonrpc.o 00:04:53.343 CXX test/cpp_headers/keyring.o 00:04:53.343 CXX test/cpp_headers/keyring_module.o 00:04:53.343 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:53.343 CC test/nvme/fdp/fdp.o 00:04:53.603 CXX test/cpp_headers/likely.o 00:04:53.603 CXX test/cpp_headers/log.o 00:04:53.603 LINK fused_ordering 00:04:53.603 CXX test/cpp_headers/lvol.o 00:04:53.603 CC test/nvme/cuse/cuse.o 00:04:53.603 LINK doorbell_aers 00:04:53.603 LINK nvme_compliance 00:04:53.603 CXX test/cpp_headers/md5.o 00:04:53.603 CXX test/cpp_headers/memory.o 00:04:53.603 CXX test/cpp_headers/mmio.o 00:04:53.603 CXX test/cpp_headers/nbd.o 00:04:53.603 CC examples/nvmf/nvmf/nvmf.o 00:04:53.603 CXX test/cpp_headers/net.o 00:04:53.862 CXX test/cpp_headers/notify.o 00:04:53.862 CXX test/cpp_headers/nvme.o 00:04:53.862 CXX test/cpp_headers/nvme_intel.o 00:04:53.862 LINK fdp 00:04:53.862 CXX test/cpp_headers/nvme_ocssd.o 00:04:53.862 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:53.862 CXX test/cpp_headers/nvme_spec.o 00:04:53.862 CXX test/cpp_headers/nvme_zns.o 00:04:53.862 CXX test/cpp_headers/nvmf_cmd.o 00:04:53.862 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:53.862 CXX test/cpp_headers/nvmf.o 00:04:53.862 LINK nvmf 00:04:53.862 CXX test/cpp_headers/nvmf_spec.o 00:04:54.124 CXX test/cpp_headers/nvmf_transport.o 00:04:54.124 CXX test/cpp_headers/opal.o 00:04:54.124 CXX test/cpp_headers/opal_spec.o 
00:04:54.124 CXX test/cpp_headers/pci_ids.o 00:04:54.124 CXX test/cpp_headers/pipe.o 00:04:54.124 CXX test/cpp_headers/queue.o 00:04:54.124 CXX test/cpp_headers/reduce.o 00:04:54.124 CXX test/cpp_headers/rpc.o 00:04:54.124 CXX test/cpp_headers/scheduler.o 00:04:54.124 CXX test/cpp_headers/scsi.o 00:04:54.124 CXX test/cpp_headers/scsi_spec.o 00:04:54.124 CXX test/cpp_headers/sock.o 00:04:54.124 CXX test/cpp_headers/stdinc.o 00:04:54.385 CXX test/cpp_headers/string.o 00:04:54.385 CXX test/cpp_headers/thread.o 00:04:54.385 CXX test/cpp_headers/trace.o 00:04:54.385 CXX test/cpp_headers/trace_parser.o 00:04:54.385 CXX test/cpp_headers/tree.o 00:04:54.385 CXX test/cpp_headers/ublk.o 00:04:54.385 CXX test/cpp_headers/util.o 00:04:54.385 CXX test/cpp_headers/uuid.o 00:04:54.385 CXX test/cpp_headers/version.o 00:04:54.385 CXX test/cpp_headers/vfio_user_pci.o 00:04:54.385 CXX test/cpp_headers/vfio_user_spec.o 00:04:54.385 CXX test/cpp_headers/vhost.o 00:04:54.385 CXX test/cpp_headers/vmd.o 00:04:54.385 CXX test/cpp_headers/xor.o 00:04:54.385 CXX test/cpp_headers/zipf.o 00:04:54.954 LINK cuse 00:04:55.899 LINK esnap 00:04:56.159 00:04:56.159 real 1m11.558s 00:04:56.159 user 5m37.042s 00:04:56.159 sys 1m5.841s 00:04:56.159 23:00:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:56.159 23:00:15 make -- common/autotest_common.sh@10 -- $ set +x 00:04:56.159 ************************************ 00:04:56.159 END TEST make 00:04:56.159 ************************************ 00:04:56.159 23:00:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:56.159 23:00:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:56.159 23:00:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:56.159 23:00:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.159 23:00:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:56.159 23:00:15 -- pm/common@44 -- $ pid=6189 00:04:56.159 23:00:15 -- 
pm/common@50 -- $ kill -TERM 6189 00:04:56.159 23:00:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.159 23:00:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:56.159 23:00:15 -- pm/common@44 -- $ pid=6191 00:04:56.159 23:00:15 -- pm/common@50 -- $ kill -TERM 6191 00:04:56.420 23:00:15 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.420 23:00:15 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.420 23:00:15 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.420 23:00:15 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.420 23:00:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.420 23:00:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.420 23:00:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.420 23:00:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.420 23:00:15 -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.420 23:00:15 -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.420 23:00:15 -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.420 23:00:15 -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.420 23:00:15 -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.420 23:00:15 -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.420 23:00:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.420 23:00:15 -- scripts/common.sh@344 -- # case "$op" in 00:04:56.420 23:00:15 -- scripts/common.sh@345 -- # : 1 00:04:56.420 23:00:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.420 23:00:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.420 23:00:15 -- scripts/common.sh@365 -- # decimal 1 00:04:56.420 23:00:15 -- scripts/common.sh@353 -- # local d=1 00:04:56.420 23:00:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.420 23:00:15 -- scripts/common.sh@355 -- # echo 1 00:04:56.420 23:00:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.420 23:00:15 -- scripts/common.sh@366 -- # decimal 2 00:04:56.420 23:00:15 -- scripts/common.sh@353 -- # local d=2 00:04:56.420 23:00:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.420 23:00:15 -- scripts/common.sh@355 -- # echo 2 00:04:56.420 23:00:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.420 23:00:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.420 23:00:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.420 23:00:15 -- scripts/common.sh@368 -- # return 0 00:04:56.420 23:00:15 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.420 23:00:15 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.420 --rc genhtml_branch_coverage=1 00:04:56.420 --rc genhtml_function_coverage=1 00:04:56.420 --rc genhtml_legend=1 00:04:56.420 --rc geninfo_all_blocks=1 00:04:56.420 --rc geninfo_unexecuted_blocks=1 00:04:56.420 00:04:56.420 ' 00:04:56.420 23:00:15 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.420 --rc genhtml_branch_coverage=1 00:04:56.420 --rc genhtml_function_coverage=1 00:04:56.420 --rc genhtml_legend=1 00:04:56.420 --rc geninfo_all_blocks=1 00:04:56.420 --rc geninfo_unexecuted_blocks=1 00:04:56.420 00:04:56.420 ' 00:04:56.420 23:00:15 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.420 --rc genhtml_branch_coverage=1 00:04:56.420 --rc 
genhtml_function_coverage=1 00:04:56.420 --rc genhtml_legend=1 00:04:56.420 --rc geninfo_all_blocks=1 00:04:56.420 --rc geninfo_unexecuted_blocks=1 00:04:56.420 00:04:56.420 ' 00:04:56.420 23:00:15 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.420 --rc genhtml_branch_coverage=1 00:04:56.420 --rc genhtml_function_coverage=1 00:04:56.420 --rc genhtml_legend=1 00:04:56.420 --rc geninfo_all_blocks=1 00:04:56.420 --rc geninfo_unexecuted_blocks=1 00:04:56.420 00:04:56.420 ' 00:04:56.420 23:00:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.420 23:00:15 -- nvmf/common.sh@7 -- # uname -s 00:04:56.420 23:00:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.420 23:00:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.420 23:00:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.420 23:00:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.420 23:00:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.420 23:00:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.420 23:00:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.420 23:00:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.420 23:00:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.420 23:00:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.420 23:00:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6da4a535-f76d-49b4-b931-740a439f424b 00:04:56.420 23:00:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=6da4a535-f76d-49b4-b931-740a439f424b 00:04:56.420 23:00:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.420 23:00:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.420 23:00:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.420 23:00:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:56.420 23:00:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.420 23:00:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.420 23:00:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.420 23:00:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.420 23:00:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.420 23:00:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.420 23:00:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.420 23:00:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.420 23:00:15 -- paths/export.sh@5 -- # export PATH 00:04:56.420 23:00:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.420 23:00:15 -- nvmf/common.sh@51 -- # : 0 00:04:56.420 23:00:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.420 23:00:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.420 23:00:15 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:56.420 23:00:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.420 23:00:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.420 23:00:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.420 23:00:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.420 23:00:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.420 23:00:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.420 23:00:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:56.420 23:00:15 -- spdk/autotest.sh@32 -- # uname -s 00:04:56.420 23:00:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:56.420 23:00:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:56.420 23:00:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.420 23:00:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:56.420 23:00:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.420 23:00:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:56.420 23:00:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:56.420 23:00:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:56.420 23:00:15 -- spdk/autotest.sh@48 -- # udevadm_pid=66755 00:04:56.420 23:00:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:56.420 23:00:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:56.420 23:00:15 -- pm/common@17 -- # local monitor 00:04:56.420 23:00:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.420 23:00:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.420 23:00:15 -- pm/common@21 -- # date +%s 00:04:56.420 23:00:15 -- pm/common@25 -- # sleep 1 00:04:56.420 23:00:15 -- 
pm/common@21 -- # date +%s 00:04:56.420 23:00:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731970815 00:04:56.681 23:00:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731970815 00:04:56.681 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731970815_collect-vmstat.pm.log 00:04:56.681 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731970815_collect-cpu-load.pm.log 00:04:57.621 23:00:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:57.621 23:00:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:57.621 23:00:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.621 23:00:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.621 23:00:16 -- spdk/autotest.sh@59 -- # create_test_list 00:04:57.621 23:00:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:57.621 23:00:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.621 23:00:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:57.621 23:00:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:57.621 23:00:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:57.621 23:00:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:57.621 23:00:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.621 23:00:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:57.621 23:00:16 -- common/autotest_common.sh@1455 -- # uname 00:04:57.621 23:00:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:57.621 23:00:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:57.621 23:00:16 -- common/autotest_common.sh@1475 -- 
# uname 00:04:57.621 23:00:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:57.621 23:00:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:57.621 23:00:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:57.621 lcov: LCOV version 1.15 00:04:57.621 23:00:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:12.550 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:12.550 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:24.801 23:00:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:24.801 23:00:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.801 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.801 23:00:43 -- spdk/autotest.sh@78 -- # rm -f 00:05:24.801 23:00:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.371 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:25.371 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:25.371 23:00:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:25.371 23:00:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:25.371 23:00:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:25.371 23:00:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:25.371 
23:00:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:25.371 23:00:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:25.371 23:00:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:25.371 23:00:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:25.371 23:00:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:25.371 23:00:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:25.371 23:00:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:25.371 23:00:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:25.371 23:00:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:25.371 23:00:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:25.371 23:00:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:25.371 23:00:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:25.371 23:00:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:25.371 23:00:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:25.371 23:00:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:25.371 23:00:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.371 23:00:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.371 23:00:44 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:25.371 23:00:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:25.371 23:00:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:25.371 No valid GPT data, bailing 00:05:25.371 23:00:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.371 23:00:44 -- scripts/common.sh@394 -- # pt= 00:05:25.371 23:00:44 -- scripts/common.sh@395 -- # return 1 00:05:25.371 23:00:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:25.371 1+0 records in 00:05:25.371 1+0 records out 00:05:25.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612314 s, 171 MB/s 00:05:25.371 23:00:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.371 23:00:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.371 23:00:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:25.371 23:00:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:25.371 23:00:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:25.630 No valid GPT data, bailing 00:05:25.630 23:00:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.630 23:00:44 -- scripts/common.sh@394 -- # pt= 00:05:25.630 23:00:44 -- scripts/common.sh@395 -- # return 1 00:05:25.630 23:00:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:25.630 1+0 records in 00:05:25.630 1+0 records out 00:05:25.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640223 s, 164 MB/s 00:05:25.630 23:00:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.630 23:00:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.630 23:00:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:25.630 23:00:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:25.630 23:00:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:25.630 No valid GPT data, bailing 00:05:25.630 23:00:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:25.630 23:00:44 -- scripts/common.sh@394 -- # pt= 00:05:25.630 23:00:44 -- scripts/common.sh@395 -- # return 1 00:05:25.630 23:00:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:25.630 1+0 records in 00:05:25.630 1+0 records out 00:05:25.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643847 s, 163 MB/s 00:05:25.630 23:00:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.630 23:00:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.630 23:00:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:25.630 23:00:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:25.630 23:00:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:25.630 No valid GPT data, bailing 00:05:25.630 23:00:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:25.630 23:00:44 -- scripts/common.sh@394 -- # pt= 00:05:25.630 23:00:44 -- scripts/common.sh@395 -- # return 1 00:05:25.630 23:00:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:25.630 1+0 records in 00:05:25.630 1+0 records out 00:05:25.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621837 s, 169 MB/s 00:05:25.630 23:00:44 -- spdk/autotest.sh@105 -- # sync 00:05:25.891 23:00:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:25.891 23:00:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:25.891 23:00:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:28.455 23:00:47 -- spdk/autotest.sh@111 -- # uname -s 00:05:28.455 23:00:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:28.455 23:00:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:28.455 23:00:47 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:29.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.396 Hugepages 00:05:29.396 node hugesize free / total 00:05:29.396 node0 1048576kB 0 / 0 00:05:29.396 node0 2048kB 0 / 0 00:05:29.396 00:05:29.396 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:29.396 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:29.657 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:29.657 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:29.657 23:00:48 -- spdk/autotest.sh@117 -- # uname -s 00:05:29.657 23:00:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:29.657 23:00:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:29.657 23:00:48 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.594 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.594 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.853 23:00:49 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:31.794 23:00:50 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:31.794 23:00:50 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:31.794 23:00:50 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.794 23:00:50 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:31.794 23:00:50 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:31.794 23:00:50 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:31.794 23:00:50 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.794 23:00:50 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:31.794 23:00:50 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:31.794 23:00:51 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:31.794 23:00:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:31.794 23:00:51 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.367 Waiting for block devices as requested 00:05:32.367 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.367 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.631 23:00:51 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:32.631 23:00:51 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:32.631 23:00:51 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:32.631 23:00:51 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:32.631 23:00:51 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:32.631 23:00:51 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1541 -- # continue 00:05:32.631 23:00:51 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:32.631 23:00:51 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:32.631 23:00:51 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:32.631 23:00:51 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:32.631 23:00:51 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:32.631 23:00:51 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:32.631 23:00:51 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:32.631 23:00:51 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:32.631 23:00:51 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:32.631 23:00:51 -- common/autotest_common.sh@1541 -- # continue 00:05:32.631 23:00:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:32.631 23:00:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.631 23:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.631 23:00:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:32.631 23:00:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.631 23:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.631 23:00:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.570 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.570 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.831 23:00:52 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:33.831 23:00:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.831 23:00:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.831 23:00:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:33.831 23:00:53 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:33.831 23:00:53 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.831 23:00:53 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:33.831 23:00:53 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:33.831 23:00:53 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:33.831 23:00:53 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:33.831 23:00:53 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:33.831 
23:00:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:33.831 23:00:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:33.831 23:00:53 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.831 23:00:53 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:33.831 23:00:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:33.831 23:00:53 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:33.831 23:00:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:33.831 23:00:53 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:33.831 23:00:53 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:33.831 23:00:53 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:33.831 23:00:53 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.831 23:00:53 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:33.831 23:00:53 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:33.831 23:00:53 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:33.831 23:00:53 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.831 23:00:53 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:33.831 23:00:53 -- common/autotest_common.sh@1570 -- # return 0 00:05:33.831 23:00:53 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:33.831 23:00:53 -- common/autotest_common.sh@1578 -- # return 0 00:05:33.831 23:00:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:33.831 23:00:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:33.831 23:00:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:33.831 23:00:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:33.831 23:00:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:33.831 23:00:53 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.831 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.831 23:00:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:33.831 23:00:53 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:33.831 23:00:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.831 23:00:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.831 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.831 ************************************ 00:05:33.831 START TEST env 00:05:33.831 ************************************ 00:05:33.831 23:00:53 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:34.093 * Looking for test storage... 00:05:34.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.093 23:00:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.093 23:00:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.093 23:00:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.093 23:00:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.093 23:00:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.093 23:00:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.093 23:00:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.093 23:00:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.093 23:00:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.093 23:00:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.093 23:00:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.093 23:00:53 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:34.093 23:00:53 env -- scripts/common.sh@345 -- # : 1 00:05:34.093 23:00:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.093 23:00:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.093 23:00:53 env -- scripts/common.sh@365 -- # decimal 1 00:05:34.093 23:00:53 env -- scripts/common.sh@353 -- # local d=1 00:05:34.093 23:00:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.093 23:00:53 env -- scripts/common.sh@355 -- # echo 1 00:05:34.093 23:00:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.093 23:00:53 env -- scripts/common.sh@366 -- # decimal 2 00:05:34.093 23:00:53 env -- scripts/common.sh@353 -- # local d=2 00:05:34.093 23:00:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.093 23:00:53 env -- scripts/common.sh@355 -- # echo 2 00:05:34.093 23:00:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.093 23:00:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.093 23:00:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.093 23:00:53 env -- scripts/common.sh@368 -- # return 0 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.093 --rc genhtml_branch_coverage=1 00:05:34.093 --rc genhtml_function_coverage=1 00:05:34.093 --rc genhtml_legend=1 00:05:34.093 --rc geninfo_all_blocks=1 00:05:34.093 --rc geninfo_unexecuted_blocks=1 00:05:34.093 00:05:34.093 ' 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.093 --rc genhtml_branch_coverage=1 00:05:34.093 --rc genhtml_function_coverage=1 00:05:34.093 --rc genhtml_legend=1 00:05:34.093 --rc 
geninfo_all_blocks=1 00:05:34.093 --rc geninfo_unexecuted_blocks=1 00:05:34.093 00:05:34.093 ' 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.093 --rc genhtml_branch_coverage=1 00:05:34.093 --rc genhtml_function_coverage=1 00:05:34.093 --rc genhtml_legend=1 00:05:34.093 --rc geninfo_all_blocks=1 00:05:34.093 --rc geninfo_unexecuted_blocks=1 00:05:34.093 00:05:34.093 ' 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.093 --rc genhtml_branch_coverage=1 00:05:34.093 --rc genhtml_function_coverage=1 00:05:34.093 --rc genhtml_legend=1 00:05:34.093 --rc geninfo_all_blocks=1 00:05:34.093 --rc geninfo_unexecuted_blocks=1 00:05:34.093 00:05:34.093 ' 00:05:34.093 23:00:53 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.093 23:00:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.093 23:00:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.093 ************************************ 00:05:34.093 START TEST env_memory 00:05:34.093 ************************************ 00:05:34.093 23:00:53 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:34.093 00:05:34.093 00:05:34.093 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.093 http://cunit.sourceforge.net/ 00:05:34.093 00:05:34.093 00:05:34.093 Suite: memory 00:05:34.354 Test: alloc and free memory map ...[2024-11-18 23:00:53.473572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.354 passed 00:05:34.354 Test: mem map translation ...[2024-11-18 23:00:53.514212] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.354 [2024-11-18 23:00:53.514247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.354 [2024-11-18 23:00:53.514322] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.354 [2024-11-18 23:00:53.514341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.354 passed 00:05:34.354 Test: mem map registration ...[2024-11-18 23:00:53.579022] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:34.354 [2024-11-18 23:00:53.579061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:34.354 passed 00:05:34.354 Test: mem map adjacent registrations ...passed 00:05:34.354 00:05:34.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.355 suites 1 1 n/a 0 0 00:05:34.355 tests 4 4 4 0 0 00:05:34.355 asserts 152 152 152 0 n/a 00:05:34.355 00:05:34.355 Elapsed time = 0.222 seconds 00:05:34.355 00:05:34.355 ************************************ 00:05:34.355 END TEST env_memory 00:05:34.355 ************************************ 00:05:34.355 real 0m0.268s 00:05:34.355 user 0m0.234s 00:05:34.355 sys 0m0.025s 00:05:34.355 23:00:53 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.355 23:00:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.616 23:00:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:34.616 
23:00:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.616 23:00:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.616 23:00:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.616 ************************************ 00:05:34.616 START TEST env_vtophys 00:05:34.616 ************************************ 00:05:34.616 23:00:53 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:34.616 EAL: lib.eal log level changed from notice to debug 00:05:34.616 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 1 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 2 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 3 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 4 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 5 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 6 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 7 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 8 as core 0 on socket 0 00:05:34.616 EAL: Detected lcore 9 as core 0 on socket 0 00:05:34.616 EAL: Maximum logical cores by configuration: 128 00:05:34.616 EAL: Detected CPU lcores: 10 00:05:34.616 EAL: Detected NUMA nodes: 1 00:05:34.616 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:34.616 EAL: Detected shared linkage of DPDK 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:34.616 EAL: Registered [vdev] bus. 
00:05:34.616 EAL: bus.vdev log level changed from disabled to notice 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:34.616 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:34.616 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:34.616 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:34.616 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.616 EAL: No shared files mode enabled, IPC is disabled 00:05:34.616 EAL: Selected IOVA mode 'PA' 00:05:34.616 EAL: Probing VFIO support... 00:05:34.616 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:34.616 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:34.616 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.616 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.616 EAL: Setting up physically contiguous memory... 
00:05:34.616 EAL: Setting maximum number of open files to 524288 00:05:34.616 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.616 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.616 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.616 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.616 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.616 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.616 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.616 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.616 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.616 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.616 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.616 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.616 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.616 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.616 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.616 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.616 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.616 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.616 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.616 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.616 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.616 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.616 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.616 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.616 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.616 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.616 EAL: Hugepages will be freed exactly as allocated. 
00:05:34.616 EAL: No shared files mode enabled, IPC is disabled 00:05:34.616 EAL: No shared files mode enabled, IPC is disabled 00:05:34.616 EAL: TSC frequency is ~2290000 KHz 00:05:34.616 EAL: Main lcore 0 is ready (tid=7fbf83e12a40;cpuset=[0]) 00:05:34.616 EAL: Trying to obtain current memory policy. 00:05:34.616 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.616 EAL: Restoring previous memory policy: 0 00:05:34.616 EAL: request: mp_malloc_sync 00:05:34.616 EAL: No shared files mode enabled, IPC is disabled 00:05:34.617 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.617 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:34.617 EAL: No shared files mode enabled, IPC is disabled 00:05:34.617 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.617 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.617 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:34.617 00:05:34.617 00:05:34.617 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.617 http://cunit.sourceforge.net/ 00:05:34.617 00:05:34.617 00:05:34.617 Suite: components_suite 00:05:35.188 Test: vtophys_malloc_test ...passed 00:05:35.188 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.188 EAL: Trying to obtain current memory policy. 
00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.188 EAL: Trying to obtain current memory policy. 00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.188 EAL: Trying to obtain current memory policy. 00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.188 EAL: Trying to obtain current memory policy. 
00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.188 EAL: Trying to obtain current memory policy. 00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.188 EAL: Trying to obtain current memory policy. 00:05:35.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.188 EAL: Restoring previous memory policy: 4 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.188 EAL: request: mp_malloc_sync 00:05:35.188 EAL: No shared files mode enabled, IPC is disabled 00:05:35.188 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.189 EAL: Trying to obtain current memory policy. 
00:05:35.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.189 EAL: Restoring previous memory policy: 4 00:05:35.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.189 EAL: request: mp_malloc_sync 00:05:35.189 EAL: No shared files mode enabled, IPC is disabled 00:05:35.189 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.189 EAL: request: mp_malloc_sync 00:05:35.189 EAL: No shared files mode enabled, IPC is disabled 00:05:35.189 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.189 EAL: Trying to obtain current memory policy. 00:05:35.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.449 EAL: Restoring previous memory policy: 4 00:05:35.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.449 EAL: request: mp_malloc_sync 00:05:35.449 EAL: No shared files mode enabled, IPC is disabled 00:05:35.449 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.449 EAL: request: mp_malloc_sync 00:05:35.449 EAL: No shared files mode enabled, IPC is disabled 00:05:35.449 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.449 EAL: Trying to obtain current memory policy. 
00:05:35.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.709 EAL: Restoring previous memory policy: 4 00:05:35.709 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.709 EAL: request: mp_malloc_sync 00:05:35.709 EAL: No shared files mode enabled, IPC is disabled 00:05:35.709 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.969 passed 00:05:35.969 00:05:35.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.969 suites 1 1 n/a 0 0 00:05:35.969 tests 2 2 2 0 0 00:05:35.969 asserts 5218 5218 5218 0 n/a 00:05:35.969 00:05:35.969 Elapsed time = 1.332 seconds 00:05:35.969 EAL: request: mp_malloc_sync 00:05:35.969 EAL: No shared files mode enabled, IPC is disabled 00:05:35.969 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.969 EAL: request: mp_malloc_sync 00:05:35.969 EAL: No shared files mode enabled, IPC is disabled 00:05:35.969 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.969 EAL: No shared files mode enabled, IPC is disabled 00:05:35.969 EAL: No shared files mode enabled, IPC is disabled 00:05:35.969 EAL: No shared files mode enabled, IPC is disabled 00:05:35.969 00:05:35.969 real 0m1.594s 00:05:35.969 user 0m0.773s 00:05:35.969 sys 0m0.686s 00:05:36.230 ************************************ 00:05:36.230 END TEST env_vtophys 00:05:36.230 ************************************ 00:05:36.230 23:00:55 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.230 23:00:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.230 23:00:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:36.230 23:00:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.230 23:00:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.230 23:00:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.230 
************************************ 00:05:36.230 START TEST env_pci 00:05:36.230 ************************************ 00:05:36.230 23:00:55 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:36.230 00:05:36.230 00:05:36.230 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.230 http://cunit.sourceforge.net/ 00:05:36.230 00:05:36.230 00:05:36.230 Suite: pci 00:05:36.230 Test: pci_hook ...[2024-11-18 23:00:55.448218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68973 has claimed it 00:05:36.230 EAL: Cannot find device (10000:00:01.0) 00:05:36.230 passed 00:05:36.230 00:05:36.230 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.230 suites 1 1 n/a 0 0 00:05:36.230 tests 1 1 1 0 0 00:05:36.230 asserts 25 25 25 0 n/a 00:05:36.230 00:05:36.230 Elapsed time = 0.008 seconds 00:05:36.230 EAL: Failed to attach device on primary process 00:05:36.230 00:05:36.230 real 0m0.095s 00:05:36.230 user 0m0.039s 00:05:36.230 sys 0m0.056s 00:05:36.230 23:00:55 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.230 ************************************ 00:05:36.230 END TEST env_pci 00:05:36.230 ************************************ 00:05:36.230 23:00:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.230 23:00:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.230 23:00:55 env -- env/env.sh@15 -- # uname 00:05:36.230 23:00:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.230 23:00:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.230 23:00:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.230 23:00:55 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:36.230 23:00:55 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.230 23:00:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.230 ************************************ 00:05:36.230 START TEST env_dpdk_post_init 00:05:36.230 ************************************ 00:05:36.230 23:00:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.491 EAL: Detected CPU lcores: 10 00:05:36.491 EAL: Detected NUMA nodes: 1 00:05:36.491 EAL: Detected shared linkage of DPDK 00:05:36.491 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.491 EAL: Selected IOVA mode 'PA' 00:05:36.491 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.491 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:36.491 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:36.491 Starting DPDK initialization... 00:05:36.491 Starting SPDK post initialization... 00:05:36.491 SPDK NVMe probe 00:05:36.491 Attaching to 0000:00:10.0 00:05:36.491 Attaching to 0000:00:11.0 00:05:36.491 Attached to 0000:00:10.0 00:05:36.491 Attached to 0000:00:11.0 00:05:36.491 Cleaning up... 
00:05:36.491 00:05:36.491 real 0m0.244s 00:05:36.491 user 0m0.063s 00:05:36.491 sys 0m0.082s 00:05:36.491 23:00:55 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.491 23:00:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.491 ************************************ 00:05:36.491 END TEST env_dpdk_post_init 00:05:36.491 ************************************ 00:05:36.750 23:00:55 env -- env/env.sh@26 -- # uname 00:05:36.750 23:00:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.750 23:00:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.750 23:00:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.750 23:00:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.750 23:00:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.750 ************************************ 00:05:36.750 START TEST env_mem_callbacks 00:05:36.750 ************************************ 00:05:36.750 23:00:55 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.750 EAL: Detected CPU lcores: 10 00:05:36.750 EAL: Detected NUMA nodes: 1 00:05:36.750 EAL: Detected shared linkage of DPDK 00:05:36.750 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.750 EAL: Selected IOVA mode 'PA' 00:05:36.750 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.750 00:05:36.750 00:05:36.750 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.750 http://cunit.sourceforge.net/ 00:05:36.750 00:05:36.750 00:05:36.750 Suite: memory 00:05:36.750 Test: test ... 
00:05:36.750 register 0x200000200000 2097152 00:05:36.750 malloc 3145728 00:05:36.750 register 0x200000400000 4194304 00:05:36.750 buf 0x200000500000 len 3145728 PASSED 00:05:36.750 malloc 64 00:05:36.750 buf 0x2000004fff40 len 64 PASSED 00:05:36.750 malloc 4194304 00:05:36.750 register 0x200000800000 6291456 00:05:36.750 buf 0x200000a00000 len 4194304 PASSED 00:05:36.750 free 0x200000500000 3145728 00:05:36.750 free 0x2000004fff40 64 00:05:36.750 unregister 0x200000400000 4194304 PASSED 00:05:36.750 free 0x200000a00000 4194304 00:05:36.750 unregister 0x200000800000 6291456 PASSED 00:05:36.750 malloc 8388608 00:05:36.750 register 0x200000400000 10485760 00:05:36.750 buf 0x200000600000 len 8388608 PASSED 00:05:36.750 free 0x200000600000 8388608 00:05:36.750 unregister 0x200000400000 10485760 PASSED 00:05:36.750 passed 00:05:36.750 00:05:36.750 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.750 suites 1 1 n/a 0 0 00:05:36.750 tests 1 1 1 0 0 00:05:36.750 asserts 15 15 15 0 n/a 00:05:36.750 00:05:36.750 Elapsed time = 0.011 seconds 00:05:36.750 00:05:36.750 real 0m0.204s 00:05:36.750 user 0m0.039s 00:05:36.750 sys 0m0.061s 00:05:36.750 23:00:56 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.750 23:00:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.750 ************************************ 00:05:36.750 END TEST env_mem_callbacks 00:05:36.750 ************************************ 00:05:37.009 00:05:37.009 real 0m2.988s 00:05:37.009 user 0m1.389s 00:05:37.009 sys 0m1.256s 00:05:37.009 23:00:56 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.009 23:00:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.009 ************************************ 00:05:37.009 END TEST env 00:05:37.009 ************************************ 00:05:37.009 23:00:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:37.009 23:00:56 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.009 23:00:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.009 23:00:56 -- common/autotest_common.sh@10 -- # set +x 00:05:37.009 ************************************ 00:05:37.009 START TEST rpc 00:05:37.009 ************************************ 00:05:37.009 23:00:56 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:37.009 * Looking for test storage... 00:05:37.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.009 23:00:56 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.009 23:00:56 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.009 23:00:56 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.269 23:00:56 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.269 23:00:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.269 23:00:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.269 23:00:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.269 23:00:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.269 23:00:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.269 23:00:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.269 23:00:56 rpc -- scripts/common.sh@345 -- # : 1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.269 23:00:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.269 23:00:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.269 23:00:56 rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.269 23:00:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.269 23:00:56 rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.269 23:00:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.270 23:00:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.270 23:00:56 rpc -- scripts/common.sh@368 -- # return 0 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.270 --rc genhtml_branch_coverage=1 00:05:37.270 --rc genhtml_function_coverage=1 00:05:37.270 --rc genhtml_legend=1 00:05:37.270 --rc geninfo_all_blocks=1 00:05:37.270 --rc geninfo_unexecuted_blocks=1 00:05:37.270 00:05:37.270 ' 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.270 --rc genhtml_branch_coverage=1 00:05:37.270 --rc genhtml_function_coverage=1 00:05:37.270 --rc genhtml_legend=1 00:05:37.270 --rc geninfo_all_blocks=1 00:05:37.270 --rc geninfo_unexecuted_blocks=1 00:05:37.270 00:05:37.270 ' 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:37.270 --rc genhtml_branch_coverage=1 00:05:37.270 --rc genhtml_function_coverage=1 00:05:37.270 --rc genhtml_legend=1 00:05:37.270 --rc geninfo_all_blocks=1 00:05:37.270 --rc geninfo_unexecuted_blocks=1 00:05:37.270 00:05:37.270 ' 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.270 --rc genhtml_branch_coverage=1 00:05:37.270 --rc genhtml_function_coverage=1 00:05:37.270 --rc genhtml_legend=1 00:05:37.270 --rc geninfo_all_blocks=1 00:05:37.270 --rc geninfo_unexecuted_blocks=1 00:05:37.270 00:05:37.270 ' 00:05:37.270 23:00:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69100 00:05:37.270 23:00:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:37.270 23:00:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.270 23:00:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69100 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@831 -- # '[' -z 69100 ']' 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.270 23:00:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.270 [2024-11-18 23:00:56.560301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:37.270 [2024-11-18 23:00:56.560581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69100 ] 00:05:37.530 [2024-11-18 23:00:56.724666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.530 [2024-11-18 23:00:56.770356] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.530 [2024-11-18 23:00:56.770503] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69100' to capture a snapshot of events at runtime. 00:05:37.530 [2024-11-18 23:00:56.770529] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.530 [2024-11-18 23:00:56.770544] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.530 [2024-11-18 23:00:56.770570] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69100 for offline analysis/debug. 
00:05:37.530 [2024-11-18 23:00:56.770632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.101 23:00:57 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.101 23:00:57 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:38.101 23:00:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.101 23:00:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.101 23:00:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:38.101 23:00:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:38.101 23:00:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.101 23:00:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.101 23:00:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 ************************************ 00:05:38.101 START TEST rpc_integrity 00:05:38.101 ************************************ 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.101 23:00:57 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.101 { 00:05:38.101 "name": "Malloc0", 00:05:38.101 "aliases": [ 00:05:38.101 "957259e8-f72a-465d-a5bb-9dfcafe7cba9" 00:05:38.101 ], 00:05:38.101 "product_name": "Malloc disk", 00:05:38.101 "block_size": 512, 00:05:38.101 "num_blocks": 16384, 00:05:38.101 "uuid": "957259e8-f72a-465d-a5bb-9dfcafe7cba9", 00:05:38.101 "assigned_rate_limits": { 00:05:38.101 "rw_ios_per_sec": 0, 00:05:38.101 "rw_mbytes_per_sec": 0, 00:05:38.101 "r_mbytes_per_sec": 0, 00:05:38.101 "w_mbytes_per_sec": 0 00:05:38.101 }, 00:05:38.101 "claimed": false, 00:05:38.101 "zoned": false, 00:05:38.101 "supported_io_types": { 00:05:38.101 "read": true, 00:05:38.101 "write": true, 00:05:38.101 "unmap": true, 00:05:38.101 "flush": true, 00:05:38.101 "reset": true, 00:05:38.101 "nvme_admin": false, 00:05:38.101 "nvme_io": false, 00:05:38.101 "nvme_io_md": false, 00:05:38.101 "write_zeroes": true, 00:05:38.101 "zcopy": true, 00:05:38.101 "get_zone_info": false, 00:05:38.101 "zone_management": false, 00:05:38.101 "zone_append": false, 00:05:38.101 "compare": false, 00:05:38.101 "compare_and_write": false, 00:05:38.101 "abort": true, 00:05:38.101 "seek_hole": false, 
00:05:38.101 "seek_data": false, 00:05:38.101 "copy": true, 00:05:38.101 "nvme_iov_md": false 00:05:38.101 }, 00:05:38.101 "memory_domains": [ 00:05:38.101 { 00:05:38.101 "dma_device_id": "system", 00:05:38.101 "dma_device_type": 1 00:05:38.101 }, 00:05:38.101 { 00:05:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.101 "dma_device_type": 2 00:05:38.101 } 00:05:38.101 ], 00:05:38.101 "driver_specific": {} 00:05:38.101 } 00:05:38.101 ]' 00:05:38.101 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.364 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.364 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:38.364 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.364 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.364 [2024-11-18 23:00:57.519884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:38.364 [2024-11-18 23:00:57.519950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.364 [2024-11-18 23:00:57.519979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:38.364 [2024-11-18 23:00:57.519988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.364 [2024-11-18 23:00:57.522244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.364 [2024-11-18 23:00:57.522298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.364 Passthru0 00:05:38.364 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.364 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.364 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.364 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:38.364 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.364 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.364 { 00:05:38.364 "name": "Malloc0", 00:05:38.364 "aliases": [ 00:05:38.364 "957259e8-f72a-465d-a5bb-9dfcafe7cba9" 00:05:38.364 ], 00:05:38.364 "product_name": "Malloc disk", 00:05:38.364 "block_size": 512, 00:05:38.364 "num_blocks": 16384, 00:05:38.364 "uuid": "957259e8-f72a-465d-a5bb-9dfcafe7cba9", 00:05:38.364 "assigned_rate_limits": { 00:05:38.364 "rw_ios_per_sec": 0, 00:05:38.364 "rw_mbytes_per_sec": 0, 00:05:38.364 "r_mbytes_per_sec": 0, 00:05:38.364 "w_mbytes_per_sec": 0 00:05:38.364 }, 00:05:38.364 "claimed": true, 00:05:38.364 "claim_type": "exclusive_write", 00:05:38.364 "zoned": false, 00:05:38.364 "supported_io_types": { 00:05:38.364 "read": true, 00:05:38.364 "write": true, 00:05:38.364 "unmap": true, 00:05:38.364 "flush": true, 00:05:38.364 "reset": true, 00:05:38.364 "nvme_admin": false, 00:05:38.364 "nvme_io": false, 00:05:38.364 "nvme_io_md": false, 00:05:38.364 "write_zeroes": true, 00:05:38.364 "zcopy": true, 00:05:38.364 "get_zone_info": false, 00:05:38.364 "zone_management": false, 00:05:38.364 "zone_append": false, 00:05:38.364 "compare": false, 00:05:38.364 "compare_and_write": false, 00:05:38.364 "abort": true, 00:05:38.364 "seek_hole": false, 00:05:38.364 "seek_data": false, 00:05:38.364 "copy": true, 00:05:38.364 "nvme_iov_md": false 00:05:38.364 }, 00:05:38.364 "memory_domains": [ 00:05:38.364 { 00:05:38.364 "dma_device_id": "system", 00:05:38.364 "dma_device_type": 1 00:05:38.364 }, 00:05:38.364 { 00:05:38.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.365 "dma_device_type": 2 00:05:38.365 } 00:05:38.365 ], 00:05:38.365 "driver_specific": {} 00:05:38.365 }, 00:05:38.365 { 00:05:38.365 "name": "Passthru0", 00:05:38.365 "aliases": [ 00:05:38.365 "25c8d084-051e-5f6a-9f15-6f795303ba51" 00:05:38.365 ], 00:05:38.365 "product_name": "passthru", 00:05:38.365 
"block_size": 512, 00:05:38.365 "num_blocks": 16384, 00:05:38.365 "uuid": "25c8d084-051e-5f6a-9f15-6f795303ba51", 00:05:38.365 "assigned_rate_limits": { 00:05:38.365 "rw_ios_per_sec": 0, 00:05:38.365 "rw_mbytes_per_sec": 0, 00:05:38.365 "r_mbytes_per_sec": 0, 00:05:38.365 "w_mbytes_per_sec": 0 00:05:38.365 }, 00:05:38.365 "claimed": false, 00:05:38.365 "zoned": false, 00:05:38.365 "supported_io_types": { 00:05:38.365 "read": true, 00:05:38.365 "write": true, 00:05:38.365 "unmap": true, 00:05:38.365 "flush": true, 00:05:38.365 "reset": true, 00:05:38.365 "nvme_admin": false, 00:05:38.365 "nvme_io": false, 00:05:38.365 "nvme_io_md": false, 00:05:38.365 "write_zeroes": true, 00:05:38.365 "zcopy": true, 00:05:38.365 "get_zone_info": false, 00:05:38.365 "zone_management": false, 00:05:38.365 "zone_append": false, 00:05:38.365 "compare": false, 00:05:38.365 "compare_and_write": false, 00:05:38.365 "abort": true, 00:05:38.365 "seek_hole": false, 00:05:38.365 "seek_data": false, 00:05:38.365 "copy": true, 00:05:38.365 "nvme_iov_md": false 00:05:38.365 }, 00:05:38.365 "memory_domains": [ 00:05:38.365 { 00:05:38.365 "dma_device_id": "system", 00:05:38.365 "dma_device_type": 1 00:05:38.365 }, 00:05:38.365 { 00:05:38.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.365 "dma_device_type": 2 00:05:38.365 } 00:05:38.365 ], 00:05:38.365 "driver_specific": { 00:05:38.365 "passthru": { 00:05:38.365 "name": "Passthru0", 00:05:38.365 "base_bdev_name": "Malloc0" 00:05:38.365 } 00:05:38.365 } 00:05:38.365 } 00:05:38.365 ]' 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 23:00:57 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.365 23:00:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.365 00:05:38.365 real 0m0.311s 00:05:38.365 user 0m0.190s 00:05:38.365 sys 0m0.051s 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.365 23:00:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 ************************************ 00:05:38.365 END TEST rpc_integrity 00:05:38.365 ************************************ 00:05:38.365 23:00:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:38.365 23:00:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.365 23:00:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.365 23:00:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.625 ************************************ 00:05:38.625 START TEST rpc_plugins 00:05:38.625 ************************************ 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:38.625 { 00:05:38.625 "name": "Malloc1", 00:05:38.625 "aliases": [ 00:05:38.625 "966ea355-0803-4626-a829-aff140297bef" 00:05:38.625 ], 00:05:38.625 "product_name": "Malloc disk", 00:05:38.625 "block_size": 4096, 00:05:38.625 "num_blocks": 256, 00:05:38.625 "uuid": "966ea355-0803-4626-a829-aff140297bef", 00:05:38.625 "assigned_rate_limits": { 00:05:38.625 "rw_ios_per_sec": 0, 00:05:38.625 "rw_mbytes_per_sec": 0, 00:05:38.625 "r_mbytes_per_sec": 0, 00:05:38.625 "w_mbytes_per_sec": 0 00:05:38.625 }, 00:05:38.625 "claimed": false, 00:05:38.625 "zoned": false, 00:05:38.625 "supported_io_types": { 00:05:38.625 "read": true, 00:05:38.625 "write": true, 00:05:38.625 "unmap": true, 00:05:38.625 "flush": true, 00:05:38.625 "reset": true, 00:05:38.625 "nvme_admin": false, 00:05:38.625 "nvme_io": false, 00:05:38.625 "nvme_io_md": false, 00:05:38.625 "write_zeroes": true, 00:05:38.625 "zcopy": true, 00:05:38.625 "get_zone_info": false, 00:05:38.625 "zone_management": false, 00:05:38.625 "zone_append": false, 00:05:38.625 "compare": false, 00:05:38.625 "compare_and_write": false, 00:05:38.625 "abort": true, 00:05:38.625 "seek_hole": false, 00:05:38.625 "seek_data": false, 00:05:38.625 "copy": 
true, 00:05:38.625 "nvme_iov_md": false 00:05:38.625 }, 00:05:38.625 "memory_domains": [ 00:05:38.625 { 00:05:38.625 "dma_device_id": "system", 00:05:38.625 "dma_device_type": 1 00:05:38.625 }, 00:05:38.625 { 00:05:38.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.625 "dma_device_type": 2 00:05:38.625 } 00:05:38.625 ], 00:05:38.625 "driver_specific": {} 00:05:38.625 } 00:05:38.625 ]' 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.625 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.625 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.626 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.626 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.626 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:38.626 ************************************ 00:05:38.626 END TEST rpc_plugins 00:05:38.626 ************************************ 00:05:38.626 23:00:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.626 00:05:38.626 real 0m0.166s 00:05:38.626 user 0m0.103s 00:05:38.626 sys 0m0.023s 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.626 23:00:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.626 23:00:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.626 23:00:57 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.626 23:00:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.626 23:00:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.626 ************************************ 00:05:38.626 START TEST rpc_trace_cmd_test 00:05:38.626 ************************************ 00:05:38.626 23:00:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:38.626 23:00:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.626 23:00:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.626 23:00:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.626 23:00:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.885 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69100", 00:05:38.885 "tpoint_group_mask": "0x8", 00:05:38.885 "iscsi_conn": { 00:05:38.885 "mask": "0x2", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "scsi": { 00:05:38.885 "mask": "0x4", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "bdev": { 00:05:38.885 "mask": "0x8", 00:05:38.885 "tpoint_mask": "0xffffffffffffffff" 00:05:38.885 }, 00:05:38.885 "nvmf_rdma": { 00:05:38.885 "mask": "0x10", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "nvmf_tcp": { 00:05:38.885 "mask": "0x20", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "ftl": { 00:05:38.885 "mask": "0x40", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "blobfs": { 00:05:38.885 "mask": "0x80", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "dsa": { 00:05:38.885 "mask": "0x200", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "thread": { 00:05:38.885 "mask": "0x400", 00:05:38.885 
"tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "nvme_pcie": { 00:05:38.885 "mask": "0x800", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "iaa": { 00:05:38.885 "mask": "0x1000", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "nvme_tcp": { 00:05:38.885 "mask": "0x2000", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "bdev_nvme": { 00:05:38.885 "mask": "0x4000", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "sock": { 00:05:38.885 "mask": "0x8000", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "blob": { 00:05:38.885 "mask": "0x10000", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 }, 00:05:38.885 "bdev_raid": { 00:05:38.885 "mask": "0x20000", 00:05:38.885 "tpoint_mask": "0x0" 00:05:38.885 } 00:05:38.885 }' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.885 ************************************ 00:05:38.885 END TEST rpc_trace_cmd_test 00:05:38.885 ************************************ 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.885 00:05:38.885 real 0m0.224s 00:05:38.885 user 0m0.171s 00:05:38.885 sys 0m0.038s 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.885 23:00:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 23:00:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.885 23:00:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.885 23:00:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.885 23:00:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.885 23:00:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.885 23:00:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 START TEST rpc_daemon_integrity 00:05:38.885 ************************************ 00:05:38.885 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.145 { 00:05:39.145 "name": "Malloc2", 00:05:39.145 "aliases": [ 00:05:39.145 "838c4c57-f44a-43c6-9285-df1a628ddaa9" 00:05:39.145 ], 00:05:39.145 "product_name": "Malloc disk", 00:05:39.145 "block_size": 512, 00:05:39.145 "num_blocks": 16384, 00:05:39.145 "uuid": "838c4c57-f44a-43c6-9285-df1a628ddaa9", 00:05:39.145 "assigned_rate_limits": { 00:05:39.145 "rw_ios_per_sec": 0, 00:05:39.145 "rw_mbytes_per_sec": 0, 00:05:39.145 "r_mbytes_per_sec": 0, 00:05:39.145 "w_mbytes_per_sec": 0 00:05:39.145 }, 00:05:39.145 "claimed": false, 00:05:39.145 "zoned": false, 00:05:39.145 "supported_io_types": { 00:05:39.145 "read": true, 00:05:39.145 "write": true, 00:05:39.145 "unmap": true, 00:05:39.145 "flush": true, 00:05:39.145 "reset": true, 00:05:39.145 "nvme_admin": false, 00:05:39.145 "nvme_io": false, 00:05:39.145 "nvme_io_md": false, 00:05:39.145 "write_zeroes": true, 00:05:39.145 "zcopy": true, 00:05:39.145 "get_zone_info": false, 00:05:39.145 "zone_management": false, 00:05:39.145 "zone_append": false, 00:05:39.145 "compare": false, 00:05:39.145 "compare_and_write": false, 00:05:39.145 "abort": true, 00:05:39.145 "seek_hole": false, 00:05:39.145 "seek_data": false, 00:05:39.145 "copy": true, 00:05:39.145 "nvme_iov_md": false 00:05:39.145 }, 00:05:39.145 "memory_domains": [ 00:05:39.145 { 00:05:39.145 "dma_device_id": "system", 00:05:39.145 "dma_device_type": 1 00:05:39.145 }, 00:05:39.145 { 00:05:39.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.145 "dma_device_type": 2 00:05:39.145 } 00:05:39.145 ], 00:05:39.145 "driver_specific": {} 00:05:39.145 } 00:05:39.145 ]' 
00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.145 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.145 [2024-11-18 23:00:58.415317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:39.145 [2024-11-18 23:00:58.415366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.146 [2024-11-18 23:00:58.415386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:39.146 [2024-11-18 23:00:58.415395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.146 [2024-11-18 23:00:58.417568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.146 [2024-11-18 23:00:58.417607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.146 Passthru0 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.146 { 00:05:39.146 "name": "Malloc2", 00:05:39.146 "aliases": [ 00:05:39.146 "838c4c57-f44a-43c6-9285-df1a628ddaa9" 00:05:39.146 ], 00:05:39.146 "product_name": "Malloc disk", 00:05:39.146 "block_size": 
512, 00:05:39.146 "num_blocks": 16384, 00:05:39.146 "uuid": "838c4c57-f44a-43c6-9285-df1a628ddaa9", 00:05:39.146 "assigned_rate_limits": { 00:05:39.146 "rw_ios_per_sec": 0, 00:05:39.146 "rw_mbytes_per_sec": 0, 00:05:39.146 "r_mbytes_per_sec": 0, 00:05:39.146 "w_mbytes_per_sec": 0 00:05:39.146 }, 00:05:39.146 "claimed": true, 00:05:39.146 "claim_type": "exclusive_write", 00:05:39.146 "zoned": false, 00:05:39.146 "supported_io_types": { 00:05:39.146 "read": true, 00:05:39.146 "write": true, 00:05:39.146 "unmap": true, 00:05:39.146 "flush": true, 00:05:39.146 "reset": true, 00:05:39.146 "nvme_admin": false, 00:05:39.146 "nvme_io": false, 00:05:39.146 "nvme_io_md": false, 00:05:39.146 "write_zeroes": true, 00:05:39.146 "zcopy": true, 00:05:39.146 "get_zone_info": false, 00:05:39.146 "zone_management": false, 00:05:39.146 "zone_append": false, 00:05:39.146 "compare": false, 00:05:39.146 "compare_and_write": false, 00:05:39.146 "abort": true, 00:05:39.146 "seek_hole": false, 00:05:39.146 "seek_data": false, 00:05:39.146 "copy": true, 00:05:39.146 "nvme_iov_md": false 00:05:39.146 }, 00:05:39.146 "memory_domains": [ 00:05:39.146 { 00:05:39.146 "dma_device_id": "system", 00:05:39.146 "dma_device_type": 1 00:05:39.146 }, 00:05:39.146 { 00:05:39.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.146 "dma_device_type": 2 00:05:39.146 } 00:05:39.146 ], 00:05:39.146 "driver_specific": {} 00:05:39.146 }, 00:05:39.146 { 00:05:39.146 "name": "Passthru0", 00:05:39.146 "aliases": [ 00:05:39.146 "d19e2b7e-4960-5b16-95a3-1fdb7d110798" 00:05:39.146 ], 00:05:39.146 "product_name": "passthru", 00:05:39.146 "block_size": 512, 00:05:39.146 "num_blocks": 16384, 00:05:39.146 "uuid": "d19e2b7e-4960-5b16-95a3-1fdb7d110798", 00:05:39.146 "assigned_rate_limits": { 00:05:39.146 "rw_ios_per_sec": 0, 00:05:39.146 "rw_mbytes_per_sec": 0, 00:05:39.146 "r_mbytes_per_sec": 0, 00:05:39.146 "w_mbytes_per_sec": 0 00:05:39.146 }, 00:05:39.146 "claimed": false, 00:05:39.146 "zoned": false, 00:05:39.146 
"supported_io_types": { 00:05:39.146 "read": true, 00:05:39.146 "write": true, 00:05:39.146 "unmap": true, 00:05:39.146 "flush": true, 00:05:39.146 "reset": true, 00:05:39.146 "nvme_admin": false, 00:05:39.146 "nvme_io": false, 00:05:39.146 "nvme_io_md": false, 00:05:39.146 "write_zeroes": true, 00:05:39.146 "zcopy": true, 00:05:39.146 "get_zone_info": false, 00:05:39.146 "zone_management": false, 00:05:39.146 "zone_append": false, 00:05:39.146 "compare": false, 00:05:39.146 "compare_and_write": false, 00:05:39.146 "abort": true, 00:05:39.146 "seek_hole": false, 00:05:39.146 "seek_data": false, 00:05:39.146 "copy": true, 00:05:39.146 "nvme_iov_md": false 00:05:39.146 }, 00:05:39.146 "memory_domains": [ 00:05:39.146 { 00:05:39.146 "dma_device_id": "system", 00:05:39.146 "dma_device_type": 1 00:05:39.146 }, 00:05:39.146 { 00:05:39.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.146 "dma_device_type": 2 00:05:39.146 } 00:05:39.146 ], 00:05:39.146 "driver_specific": { 00:05:39.146 "passthru": { 00:05:39.146 "name": "Passthru0", 00:05:39.146 "base_bdev_name": "Malloc2" 00:05:39.146 } 00:05:39.146 } 00:05:39.146 } 00:05:39.146 ]' 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:39.146 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.406 ************************************ 00:05:39.406 END TEST rpc_daemon_integrity 00:05:39.406 ************************************ 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.406 00:05:39.406 real 0m0.316s 00:05:39.406 user 0m0.187s 00:05:39.406 sys 0m0.055s 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.406 23:00:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.406 23:00:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:39.406 23:00:58 rpc -- rpc/rpc.sh@84 -- # killprocess 69100 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@950 -- # '[' -z 69100 ']' 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@954 -- # kill -0 69100 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@955 -- # uname 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69100 00:05:39.406 killing process with pid 69100 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69100' 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@969 -- # kill 69100 00:05:39.406 23:00:58 rpc -- common/autotest_common.sh@974 -- # wait 69100 00:05:40.018 ************************************ 00:05:40.018 END TEST rpc 00:05:40.018 ************************************ 00:05:40.018 00:05:40.018 real 0m2.827s 00:05:40.018 user 0m3.350s 00:05:40.018 sys 0m0.865s 00:05:40.018 23:00:59 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.018 23:00:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.018 23:00:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.018 23:00:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.018 23:00:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.018 23:00:59 -- common/autotest_common.sh@10 -- # set +x 00:05:40.018 ************************************ 00:05:40.018 START TEST skip_rpc 00:05:40.018 ************************************ 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.018 * Looking for test storage... 
00:05:40.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.018 23:00:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.018 --rc genhtml_branch_coverage=1 00:05:40.018 --rc genhtml_function_coverage=1 00:05:40.018 --rc genhtml_legend=1 00:05:40.018 --rc geninfo_all_blocks=1 00:05:40.018 --rc geninfo_unexecuted_blocks=1 00:05:40.018 00:05:40.018 ' 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.018 --rc genhtml_branch_coverage=1 00:05:40.018 --rc genhtml_function_coverage=1 00:05:40.018 --rc genhtml_legend=1 00:05:40.018 --rc geninfo_all_blocks=1 00:05:40.018 --rc geninfo_unexecuted_blocks=1 00:05:40.018 00:05:40.018 ' 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.018 --rc genhtml_branch_coverage=1 00:05:40.018 --rc genhtml_function_coverage=1 00:05:40.018 --rc genhtml_legend=1 00:05:40.018 --rc geninfo_all_blocks=1 00:05:40.018 --rc geninfo_unexecuted_blocks=1 00:05:40.018 00:05:40.018 ' 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.018 --rc genhtml_branch_coverage=1 00:05:40.018 --rc genhtml_function_coverage=1 00:05:40.018 --rc genhtml_legend=1 00:05:40.018 --rc geninfo_all_blocks=1 00:05:40.018 --rc geninfo_unexecuted_blocks=1 00:05:40.018 00:05:40.018 ' 00:05:40.018 23:00:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.018 23:00:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.018 23:00:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.018 23:00:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.018 ************************************ 00:05:40.018 START TEST skip_rpc 00:05:40.018 ************************************ 00:05:40.018 23:00:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:40.018 23:00:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69302 00:05:40.018 23:00:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.018 23:00:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.018 23:00:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.292 [2024-11-18 23:00:59.457864] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:40.292 [2024-11-18 23:00:59.458064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69302 ] 00:05:40.292 [2024-11-18 23:00:59.616845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.553 [2024-11-18 23:00:59.675731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.829 23:01:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.829 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:45.829 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69302 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69302 ']' 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69302 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69302 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69302' 00:05:45.830 killing process with pid 69302 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69302 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69302 00:05:45.830 00:05:45.830 real 0m5.450s 00:05:45.830 user 0m5.032s 00:05:45.830 ************************************ 00:05:45.830 END TEST skip_rpc 00:05:45.830 ************************************ 00:05:45.830 sys 0m0.342s 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.830 23:01:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.830 23:01:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:45.830 23:01:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.830 23:01:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.830 23:01:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.830 
************************************ 00:05:45.830 START TEST skip_rpc_with_json 00:05:45.830 ************************************ 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69389 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69389 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69389 ']' 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.830 23:01:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.830 [2024-11-18 23:01:04.970730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:45.830 [2024-11-18 23:01:04.970946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69389 ] 00:05:45.830 [2024-11-18 23:01:05.130824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.830 [2024-11-18 23:01:05.178573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 [2024-11-18 23:01:05.788367] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:46.772 request: 00:05:46.772 { 00:05:46.772 "trtype": "tcp", 00:05:46.772 "method": "nvmf_get_transports", 00:05:46.772 "req_id": 1 00:05:46.772 } 00:05:46.772 Got JSON-RPC error response 00:05:46.772 response: 00:05:46.772 { 00:05:46.772 "code": -19, 00:05:46.772 "message": "No such device" 00:05:46.772 } 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 [2024-11-18 23:01:05.804448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.772 23:01:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.772 { 00:05:46.772 "subsystems": [ 00:05:46.772 { 00:05:46.772 "subsystem": "fsdev", 00:05:46.772 "config": [ 00:05:46.772 { 00:05:46.772 "method": "fsdev_set_opts", 00:05:46.772 "params": { 00:05:46.772 "fsdev_io_pool_size": 65535, 00:05:46.772 "fsdev_io_cache_size": 256 00:05:46.772 } 00:05:46.772 } 00:05:46.772 ] 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "subsystem": "keyring", 00:05:46.772 "config": [] 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "subsystem": "iobuf", 00:05:46.772 "config": [ 00:05:46.772 { 00:05:46.772 "method": "iobuf_set_options", 00:05:46.772 "params": { 00:05:46.772 "small_pool_count": 8192, 00:05:46.772 "large_pool_count": 1024, 00:05:46.772 "small_bufsize": 8192, 00:05:46.772 "large_bufsize": 135168 00:05:46.772 } 00:05:46.772 } 00:05:46.772 ] 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "subsystem": "sock", 00:05:46.772 "config": [ 00:05:46.772 { 00:05:46.772 "method": "sock_set_default_impl", 00:05:46.772 "params": { 00:05:46.772 "impl_name": "posix" 00:05:46.772 } 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "method": "sock_impl_set_options", 00:05:46.772 "params": { 00:05:46.772 "impl_name": "ssl", 00:05:46.772 "recv_buf_size": 4096, 00:05:46.772 "send_buf_size": 4096, 00:05:46.772 "enable_recv_pipe": true, 00:05:46.772 "enable_quickack": false, 00:05:46.772 "enable_placement_id": 0, 00:05:46.772 
"enable_zerocopy_send_server": true, 00:05:46.772 "enable_zerocopy_send_client": false, 00:05:46.772 "zerocopy_threshold": 0, 00:05:46.772 "tls_version": 0, 00:05:46.772 "enable_ktls": false 00:05:46.772 } 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "method": "sock_impl_set_options", 00:05:46.772 "params": { 00:05:46.772 "impl_name": "posix", 00:05:46.772 "recv_buf_size": 2097152, 00:05:46.772 "send_buf_size": 2097152, 00:05:46.772 "enable_recv_pipe": true, 00:05:46.772 "enable_quickack": false, 00:05:46.772 "enable_placement_id": 0, 00:05:46.772 "enable_zerocopy_send_server": true, 00:05:46.772 "enable_zerocopy_send_client": false, 00:05:46.772 "zerocopy_threshold": 0, 00:05:46.772 "tls_version": 0, 00:05:46.772 "enable_ktls": false 00:05:46.772 } 00:05:46.772 } 00:05:46.772 ] 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "subsystem": "vmd", 00:05:46.772 "config": [] 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "subsystem": "accel", 00:05:46.772 "config": [ 00:05:46.772 { 00:05:46.772 "method": "accel_set_options", 00:05:46.772 "params": { 00:05:46.772 "small_cache_size": 128, 00:05:46.772 "large_cache_size": 16, 00:05:46.772 "task_count": 2048, 00:05:46.772 "sequence_count": 2048, 00:05:46.772 "buf_count": 2048 00:05:46.772 } 00:05:46.772 } 00:05:46.772 ] 00:05:46.772 }, 00:05:46.772 { 00:05:46.772 "subsystem": "bdev", 00:05:46.772 "config": [ 00:05:46.772 { 00:05:46.772 "method": "bdev_set_options", 00:05:46.773 "params": { 00:05:46.773 "bdev_io_pool_size": 65535, 00:05:46.773 "bdev_io_cache_size": 256, 00:05:46.773 "bdev_auto_examine": true, 00:05:46.773 "iobuf_small_cache_size": 128, 00:05:46.773 "iobuf_large_cache_size": 16 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "bdev_raid_set_options", 00:05:46.773 "params": { 00:05:46.773 "process_window_size_kb": 1024, 00:05:46.773 "process_max_bandwidth_mb_sec": 0 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "bdev_iscsi_set_options", 00:05:46.773 "params": { 00:05:46.773 
"timeout_sec": 30 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "bdev_nvme_set_options", 00:05:46.773 "params": { 00:05:46.773 "action_on_timeout": "none", 00:05:46.773 "timeout_us": 0, 00:05:46.773 "timeout_admin_us": 0, 00:05:46.773 "keep_alive_timeout_ms": 10000, 00:05:46.773 "arbitration_burst": 0, 00:05:46.773 "low_priority_weight": 0, 00:05:46.773 "medium_priority_weight": 0, 00:05:46.773 "high_priority_weight": 0, 00:05:46.773 "nvme_adminq_poll_period_us": 10000, 00:05:46.773 "nvme_ioq_poll_period_us": 0, 00:05:46.773 "io_queue_requests": 0, 00:05:46.773 "delay_cmd_submit": true, 00:05:46.773 "transport_retry_count": 4, 00:05:46.773 "bdev_retry_count": 3, 00:05:46.773 "transport_ack_timeout": 0, 00:05:46.773 "ctrlr_loss_timeout_sec": 0, 00:05:46.773 "reconnect_delay_sec": 0, 00:05:46.773 "fast_io_fail_timeout_sec": 0, 00:05:46.773 "disable_auto_failback": false, 00:05:46.773 "generate_uuids": false, 00:05:46.773 "transport_tos": 0, 00:05:46.773 "nvme_error_stat": false, 00:05:46.773 "rdma_srq_size": 0, 00:05:46.773 "io_path_stat": false, 00:05:46.773 "allow_accel_sequence": false, 00:05:46.773 "rdma_max_cq_size": 0, 00:05:46.773 "rdma_cm_event_timeout_ms": 0, 00:05:46.773 "dhchap_digests": [ 00:05:46.773 "sha256", 00:05:46.773 "sha384", 00:05:46.773 "sha512" 00:05:46.773 ], 00:05:46.773 "dhchap_dhgroups": [ 00:05:46.773 "null", 00:05:46.773 "ffdhe2048", 00:05:46.773 "ffdhe3072", 00:05:46.773 "ffdhe4096", 00:05:46.773 "ffdhe6144", 00:05:46.773 "ffdhe8192" 00:05:46.773 ] 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "bdev_nvme_set_hotplug", 00:05:46.773 "params": { 00:05:46.773 "period_us": 100000, 00:05:46.773 "enable": false 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "bdev_wait_for_examine" 00:05:46.773 } 00:05:46.773 ] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "scsi", 00:05:46.773 "config": null 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "scheduler", 
00:05:46.773 "config": [ 00:05:46.773 { 00:05:46.773 "method": "framework_set_scheduler", 00:05:46.773 "params": { 00:05:46.773 "name": "static" 00:05:46.773 } 00:05:46.773 } 00:05:46.773 ] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "vhost_scsi", 00:05:46.773 "config": [] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "vhost_blk", 00:05:46.773 "config": [] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "ublk", 00:05:46.773 "config": [] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "nbd", 00:05:46.773 "config": [] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "nvmf", 00:05:46.773 "config": [ 00:05:46.773 { 00:05:46.773 "method": "nvmf_set_config", 00:05:46.773 "params": { 00:05:46.773 "discovery_filter": "match_any", 00:05:46.773 "admin_cmd_passthru": { 00:05:46.773 "identify_ctrlr": false 00:05:46.773 }, 00:05:46.773 "dhchap_digests": [ 00:05:46.773 "sha256", 00:05:46.773 "sha384", 00:05:46.773 "sha512" 00:05:46.773 ], 00:05:46.773 "dhchap_dhgroups": [ 00:05:46.773 "null", 00:05:46.773 "ffdhe2048", 00:05:46.773 "ffdhe3072", 00:05:46.773 "ffdhe4096", 00:05:46.773 "ffdhe6144", 00:05:46.773 "ffdhe8192" 00:05:46.773 ] 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "nvmf_set_max_subsystems", 00:05:46.773 "params": { 00:05:46.773 "max_subsystems": 1024 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "nvmf_set_crdt", 00:05:46.773 "params": { 00:05:46.773 "crdt1": 0, 00:05:46.773 "crdt2": 0, 00:05:46.773 "crdt3": 0 00:05:46.773 } 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "method": "nvmf_create_transport", 00:05:46.773 "params": { 00:05:46.773 "trtype": "TCP", 00:05:46.773 "max_queue_depth": 128, 00:05:46.773 "max_io_qpairs_per_ctrlr": 127, 00:05:46.773 "in_capsule_data_size": 4096, 00:05:46.773 "max_io_size": 131072, 00:05:46.773 "io_unit_size": 131072, 00:05:46.773 "max_aq_depth": 128, 00:05:46.773 "num_shared_buffers": 511, 00:05:46.773 "buf_cache_size": 4294967295, 
00:05:46.773 "dif_insert_or_strip": false, 00:05:46.773 "zcopy": false, 00:05:46.773 "c2h_success": true, 00:05:46.773 "sock_priority": 0, 00:05:46.773 "abort_timeout_sec": 1, 00:05:46.773 "ack_timeout": 0, 00:05:46.773 "data_wr_pool_size": 0 00:05:46.773 } 00:05:46.773 } 00:05:46.773 ] 00:05:46.773 }, 00:05:46.773 { 00:05:46.773 "subsystem": "iscsi", 00:05:46.773 "config": [ 00:05:46.773 { 00:05:46.773 "method": "iscsi_set_options", 00:05:46.773 "params": { 00:05:46.773 "node_base": "iqn.2016-06.io.spdk", 00:05:46.773 "max_sessions": 128, 00:05:46.773 "max_connections_per_session": 2, 00:05:46.773 "max_queue_depth": 64, 00:05:46.773 "default_time2wait": 2, 00:05:46.773 "default_time2retain": 20, 00:05:46.773 "first_burst_length": 8192, 00:05:46.773 "immediate_data": true, 00:05:46.773 "allow_duplicated_isid": false, 00:05:46.773 "error_recovery_level": 0, 00:05:46.773 "nop_timeout": 60, 00:05:46.773 "nop_in_interval": 30, 00:05:46.773 "disable_chap": false, 00:05:46.773 "require_chap": false, 00:05:46.773 "mutual_chap": false, 00:05:46.773 "chap_group": 0, 00:05:46.773 "max_large_datain_per_connection": 64, 00:05:46.773 "max_r2t_per_connection": 4, 00:05:46.773 "pdu_pool_size": 36864, 00:05:46.773 "immediate_data_pool_size": 16384, 00:05:46.773 "data_out_pool_size": 2048 00:05:46.773 } 00:05:46.773 } 00:05:46.773 ] 00:05:46.773 } 00:05:46.773 ] 00:05:46.773 } 00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69389 00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69389 ']' 00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69389 00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:46.773 23:01:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69389 00:05:46.773 killing process with pid 69389 00:05:46.773 23:01:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.773 23:01:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.773 23:01:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69389' 00:05:46.773 23:01:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69389 00:05:46.773 23:01:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69389 00:05:47.344 23:01:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69418 00:05:47.344 23:01:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.344 23:01:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69418 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69418 ']' 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69418 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69418 00:05:52.640 killing process with pid 69418 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69418' 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69418 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69418 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:52.640 23:01:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:52.641 ************************************ 00:05:52.641 END TEST skip_rpc_with_json 00:05:52.641 ************************************ 00:05:52.641 00:05:52.641 real 0m6.976s 00:05:52.641 user 0m6.509s 00:05:52.641 sys 0m0.741s 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.641 23:01:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:52.641 23:01:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.641 23:01:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.641 23:01:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.641 ************************************ 00:05:52.641 START TEST skip_rpc_with_delay 00:05:52.641 ************************************ 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:52.641 23:01:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.901 [2024-11-18 23:01:12.022893] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:52.901 [2024-11-18 23:01:12.023019] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:52.901 23:01:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:52.901 ************************************ 00:05:52.901 END TEST skip_rpc_with_delay 00:05:52.901 ************************************ 00:05:52.901 23:01:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.901 23:01:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.901 23:01:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.901 00:05:52.901 real 0m0.158s 00:05:52.901 user 0m0.081s 00:05:52.901 sys 0m0.075s 00:05:52.901 23:01:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.901 23:01:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:52.901 23:01:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:52.901 23:01:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:52.901 23:01:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:52.901 23:01:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.901 23:01:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.901 23:01:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.901 ************************************ 00:05:52.901 START TEST exit_on_failed_rpc_init 00:05:52.901 ************************************ 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69528 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69528 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69528 ']' 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.901 23:01:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.901 [2024-11-18 23:01:12.253248] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:52.901 [2024-11-18 23:01:12.253382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69528 ] 00:05:53.169 [2024-11-18 23:01:12.413795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.169 [2024-11-18 23:01:12.459678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.739 23:01:13 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:53.739 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.999 [2024-11-18 23:01:13.158366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:53.999 [2024-11-18 23:01:13.158510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69542 ] 00:05:53.999 [2024-11-18 23:01:13.315655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.999 [2024-11-18 23:01:13.362482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.999 [2024-11-18 23:01:13.362595] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:53.999 [2024-11-18 23:01:13.362617] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:53.999 [2024-11-18 23:01:13.362630] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69528 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69528 ']' 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69528 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69528 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.260 killing process with pid 69528 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69528' 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69528 00:05:54.260 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69528 00:05:54.850 00:05:54.850 real 0m1.752s 00:05:54.850 user 0m1.884s 00:05:54.850 sys 0m0.505s 00:05:54.850 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.850 23:01:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.850 ************************************ 00:05:54.850 END TEST exit_on_failed_rpc_init 00:05:54.850 ************************************ 00:05:54.850 23:01:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.850 00:05:54.850 real 0m14.846s 00:05:54.850 user 0m13.729s 00:05:54.850 sys 0m1.962s 00:05:54.850 23:01:13 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.850 23:01:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.850 ************************************ 00:05:54.850 END TEST skip_rpc 00:05:54.850 ************************************ 00:05:54.850 23:01:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:54.850 23:01:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.850 23:01:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.850 23:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.850 ************************************ 00:05:54.850 START TEST rpc_client 00:05:54.850 ************************************ 00:05:54.850 23:01:14 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:54.850 * Looking for test storage... 
00:05:54.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:54.850 23:01:14 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.850 23:01:14 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.850 23:01:14 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.110 23:01:14 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:55.110 23:01:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.111 23:01:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:55.111 23:01:14 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.111 23:01:14 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.111 --rc genhtml_branch_coverage=1 00:05:55.111 --rc genhtml_function_coverage=1 00:05:55.111 --rc genhtml_legend=1 00:05:55.111 --rc geninfo_all_blocks=1 00:05:55.111 --rc geninfo_unexecuted_blocks=1 00:05:55.111 00:05:55.111 ' 00:05:55.111 23:01:14 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.111 --rc genhtml_branch_coverage=1 00:05:55.111 --rc genhtml_function_coverage=1 00:05:55.111 --rc genhtml_legend=1 00:05:55.111 --rc geninfo_all_blocks=1 00:05:55.111 --rc geninfo_unexecuted_blocks=1 00:05:55.111 00:05:55.111 ' 00:05:55.111 23:01:14 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.111 --rc genhtml_branch_coverage=1 00:05:55.111 --rc genhtml_function_coverage=1 00:05:55.111 --rc genhtml_legend=1 00:05:55.111 --rc geninfo_all_blocks=1 00:05:55.111 --rc geninfo_unexecuted_blocks=1 00:05:55.111 00:05:55.111 ' 00:05:55.111 23:01:14 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.111 --rc genhtml_branch_coverage=1 00:05:55.111 --rc genhtml_function_coverage=1 00:05:55.111 --rc genhtml_legend=1 00:05:55.111 --rc geninfo_all_blocks=1 00:05:55.111 --rc geninfo_unexecuted_blocks=1 00:05:55.111 00:05:55.111 ' 00:05:55.111 23:01:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:55.111 OK 00:05:55.111 23:01:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.111 00:05:55.111 real 0m0.294s 00:05:55.111 user 0m0.164s 00:05:55.111 sys 0m0.147s 00:05:55.111 23:01:14 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.111 23:01:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.111 ************************************ 00:05:55.111 END TEST rpc_client 00:05:55.111 ************************************ 00:05:55.111 23:01:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.111 23:01:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.111 23:01:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.111 23:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.111 ************************************ 00:05:55.111 START TEST json_config 00:05:55.111 ************************************ 00:05:55.111 23:01:14 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.111 23:01:14 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.111 23:01:14 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.111 23:01:14 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.380 23:01:14 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.380 23:01:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.380 23:01:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.380 23:01:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.380 23:01:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.380 23:01:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.380 23:01:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.380 23:01:14 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.380 23:01:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.380 23:01:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.380 23:01:14 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.380 23:01:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.380 23:01:14 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.380 23:01:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.380 23:01:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.380 23:01:14 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.380 23:01:14 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.380 23:01:14 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.380 --rc genhtml_branch_coverage=1 00:05:55.380 --rc genhtml_function_coverage=1 00:05:55.380 --rc genhtml_legend=1 00:05:55.380 --rc geninfo_all_blocks=1 00:05:55.380 --rc geninfo_unexecuted_blocks=1 00:05:55.380 00:05:55.380 ' 00:05:55.380 23:01:14 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.380 --rc genhtml_branch_coverage=1 00:05:55.380 --rc genhtml_function_coverage=1 00:05:55.380 --rc genhtml_legend=1 00:05:55.380 --rc geninfo_all_blocks=1 00:05:55.380 --rc geninfo_unexecuted_blocks=1 00:05:55.380 00:05:55.380 ' 00:05:55.380 23:01:14 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.380 --rc genhtml_branch_coverage=1 00:05:55.380 --rc genhtml_function_coverage=1 00:05:55.380 --rc genhtml_legend=1 00:05:55.380 --rc geninfo_all_blocks=1 00:05:55.380 --rc geninfo_unexecuted_blocks=1 00:05:55.380 00:05:55.380 ' 00:05:55.380 23:01:14 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.380 --rc genhtml_branch_coverage=1 00:05:55.380 --rc genhtml_function_coverage=1 00:05:55.380 --rc genhtml_legend=1 00:05:55.380 --rc geninfo_all_blocks=1 00:05:55.380 --rc geninfo_unexecuted_blocks=1 00:05:55.380 00:05:55.380 ' 00:05:55.380 23:01:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6da4a535-f76d-49b4-b931-740a439f424b 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=6da4a535-f76d-49b4-b931-740a439f424b 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.380 23:01:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.381 23:01:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.381 23:01:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.381 23:01:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.381 23:01:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.381 23:01:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.381 23:01:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.381 23:01:14 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.381 23:01:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.381 23:01:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.381 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.381 23:01:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.381 WARNING: No tests are enabled so not running JSON configuration tests 00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:55.381 23:01:14 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:55.381 00:05:55.381 real 0m0.226s 00:05:55.381 user 0m0.141s 00:05:55.381 sys 0m0.093s 00:05:55.381 23:01:14 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.381 23:01:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.381 ************************************ 00:05:55.381 END TEST json_config 00:05:55.381 ************************************ 00:05:55.381 23:01:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.381 23:01:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.381 23:01:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.381 23:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.381 ************************************ 00:05:55.381 START TEST json_config_extra_key 00:05:55.381 ************************************ 00:05:55.381 23:01:14 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.640 23:01:14 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.641 23:01:14 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.641 --rc genhtml_branch_coverage=1 00:05:55.641 --rc genhtml_function_coverage=1 00:05:55.641 --rc genhtml_legend=1 00:05:55.641 --rc geninfo_all_blocks=1 00:05:55.641 --rc geninfo_unexecuted_blocks=1 00:05:55.641 00:05:55.641 ' 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.641 --rc genhtml_branch_coverage=1 00:05:55.641 --rc genhtml_function_coverage=1 00:05:55.641 --rc 
genhtml_legend=1 00:05:55.641 --rc geninfo_all_blocks=1 00:05:55.641 --rc geninfo_unexecuted_blocks=1 00:05:55.641 00:05:55.641 ' 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.641 --rc genhtml_branch_coverage=1 00:05:55.641 --rc genhtml_function_coverage=1 00:05:55.641 --rc genhtml_legend=1 00:05:55.641 --rc geninfo_all_blocks=1 00:05:55.641 --rc geninfo_unexecuted_blocks=1 00:05:55.641 00:05:55.641 ' 00:05:55.641 23:01:14 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.641 --rc genhtml_branch_coverage=1 00:05:55.641 --rc genhtml_function_coverage=1 00:05:55.641 --rc genhtml_legend=1 00:05:55.641 --rc geninfo_all_blocks=1 00:05:55.641 --rc geninfo_unexecuted_blocks=1 00:05:55.641 00:05:55.641 ' 00:05:55.641 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6da4a535-f76d-49b4-b931-740a439f424b 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6da4a535-f76d-49b4-b931-740a439f424b 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.641 23:01:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.641 23:01:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.641 23:01:14 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.641 23:01:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.641 23:01:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:55.641 23:01:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.641 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.641 23:01:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.641 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.641 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:55.641 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:55.641 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:55.641 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.642 INFO: launching applications... 00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:55.642 23:01:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69730 00:05:55.642 Waiting for target to run... 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69730 /var/tmp/spdk_tgt.sock 00:05:55.642 23:01:14 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69730 ']' 00:05:55.642 23:01:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.642 23:01:14 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.642 23:01:14 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:55.642 23:01:14 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.642 23:01:14 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.642 23:01:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.902 [2024-11-18 23:01:15.025534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.902 [2024-11-18 23:01:15.025727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69730 ] 00:05:56.161 [2024-11-18 23:01:15.428532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.161 [2024-11-18 23:01:15.458699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.730 23:01:15 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.730 23:01:15 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:56.730 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:56.730 INFO: shutting down applications... 00:05:56.730 23:01:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:56.730 23:01:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69730 ]] 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69730 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69730 00:05:56.730 23:01:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69730 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:56.989 SPDK target shutdown done 00:05:56.989 23:01:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:56.989 Success 00:05:56.989 23:01:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:56.989 00:05:56.989 real 0m1.665s 00:05:56.989 user 0m1.358s 00:05:56.989 sys 0m0.514s 00:05:56.989 23:01:16 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.989 23:01:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.989 ************************************ 
00:05:56.989 END TEST json_config_extra_key 00:05:56.989 ************************************ 00:05:57.250 23:01:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.250 23:01:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.250 23:01:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.250 23:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 ************************************ 00:05:57.250 START TEST alias_rpc 00:05:57.250 ************************************ 00:05:57.250 23:01:16 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.250 * Looking for test storage... 00:05:57.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:57.250 23:01:16 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.250 23:01:16 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.250 23:01:16 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.250 23:01:16 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.250 23:01:16 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.250 23:01:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.511 23:01:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.511 --rc genhtml_branch_coverage=1 00:05:57.511 --rc genhtml_function_coverage=1 00:05:57.511 --rc genhtml_legend=1 00:05:57.511 --rc geninfo_all_blocks=1 00:05:57.511 --rc geninfo_unexecuted_blocks=1 00:05:57.511 00:05:57.511 ' 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.511 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.511 --rc genhtml_branch_coverage=1 00:05:57.511 --rc genhtml_function_coverage=1 00:05:57.511 --rc genhtml_legend=1 00:05:57.511 --rc geninfo_all_blocks=1 00:05:57.511 --rc geninfo_unexecuted_blocks=1 00:05:57.511 00:05:57.511 ' 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.511 --rc genhtml_branch_coverage=1 00:05:57.511 --rc genhtml_function_coverage=1 00:05:57.511 --rc genhtml_legend=1 00:05:57.511 --rc geninfo_all_blocks=1 00:05:57.511 --rc geninfo_unexecuted_blocks=1 00:05:57.511 00:05:57.511 ' 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.511 --rc genhtml_branch_coverage=1 00:05:57.511 --rc genhtml_function_coverage=1 00:05:57.511 --rc genhtml_legend=1 00:05:57.511 --rc geninfo_all_blocks=1 00:05:57.511 --rc geninfo_unexecuted_blocks=1 00:05:57.511 00:05:57.511 ' 00:05:57.511 23:01:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.511 23:01:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69809 00:05:57.511 23:01:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.511 23:01:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69809 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69809 ']' 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.511 23:01:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.511 [2024-11-18 23:01:16.722685] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:57.511 [2024-11-18 23:01:16.722911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69809 ] 00:05:57.511 [2024-11-18 23:01:16.882409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.771 [2024-11-18 23:01:16.927068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.338 23:01:17 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.338 23:01:17 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.338 23:01:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:58.618 23:01:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69809 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69809 ']' 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69809 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69809 00:05:58.619 killing process with pid 69809 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.619 23:01:17 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69809' 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@969 -- # kill 69809 00:05:58.619 23:01:17 alias_rpc -- common/autotest_common.sh@974 -- # wait 69809 00:05:58.888 00:05:58.888 real 0m1.736s 00:05:58.888 user 0m1.701s 00:05:58.888 sys 0m0.525s 00:05:58.888 23:01:18 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.888 23:01:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 ************************************ 00:05:58.888 END TEST alias_rpc 00:05:58.888 ************************************ 00:05:58.888 23:01:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:58.888 23:01:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.888 23:01:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.888 23:01:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.888 23:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 ************************************ 00:05:58.888 START TEST spdkcli_tcp 00:05:58.888 ************************************ 00:05:58.888 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:59.147 * Looking for test storage... 
00:05:59.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.147 23:01:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.147 --rc genhtml_branch_coverage=1 00:05:59.147 --rc genhtml_function_coverage=1 00:05:59.147 --rc genhtml_legend=1 00:05:59.147 --rc geninfo_all_blocks=1 00:05:59.147 --rc geninfo_unexecuted_blocks=1 00:05:59.147 00:05:59.147 ' 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.147 --rc genhtml_branch_coverage=1 00:05:59.147 --rc genhtml_function_coverage=1 00:05:59.147 --rc genhtml_legend=1 00:05:59.147 --rc geninfo_all_blocks=1 00:05:59.147 --rc geninfo_unexecuted_blocks=1 00:05:59.147 00:05:59.147 ' 00:05:59.147 23:01:18 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.147 --rc genhtml_branch_coverage=1 00:05:59.147 --rc genhtml_function_coverage=1 00:05:59.147 --rc genhtml_legend=1 00:05:59.147 --rc geninfo_all_blocks=1 00:05:59.147 --rc geninfo_unexecuted_blocks=1 00:05:59.147 00:05:59.147 ' 00:05:59.147 23:01:18 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.147 --rc genhtml_branch_coverage=1 00:05:59.148 --rc genhtml_function_coverage=1 00:05:59.148 --rc genhtml_legend=1 00:05:59.148 --rc geninfo_all_blocks=1 00:05:59.148 --rc geninfo_unexecuted_blocks=1 00:05:59.148 00:05:59.148 ' 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69887 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:59.148 23:01:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69887 00:05:59.148 23:01:18 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69887 ']' 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.148 23:01:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.409 [2024-11-18 23:01:18.541500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:59.409 [2024-11-18 23:01:18.541621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69887 ] 00:05:59.409 [2024-11-18 23:01:18.702516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.409 [2024-11-18 23:01:18.747789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.409 [2024-11-18 23:01:18.747888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.977 23:01:19 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.977 23:01:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:59.977 23:01:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.977 23:01:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69900 00:05:59.977 23:01:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.236 [ 00:06:00.236 "bdev_malloc_delete", 
00:06:00.236 "bdev_malloc_create", 00:06:00.236 "bdev_null_resize", 00:06:00.236 "bdev_null_delete", 00:06:00.236 "bdev_null_create", 00:06:00.236 "bdev_nvme_cuse_unregister", 00:06:00.236 "bdev_nvme_cuse_register", 00:06:00.236 "bdev_opal_new_user", 00:06:00.236 "bdev_opal_set_lock_state", 00:06:00.236 "bdev_opal_delete", 00:06:00.236 "bdev_opal_get_info", 00:06:00.236 "bdev_opal_create", 00:06:00.236 "bdev_nvme_opal_revert", 00:06:00.236 "bdev_nvme_opal_init", 00:06:00.236 "bdev_nvme_send_cmd", 00:06:00.236 "bdev_nvme_set_keys", 00:06:00.236 "bdev_nvme_get_path_iostat", 00:06:00.236 "bdev_nvme_get_mdns_discovery_info", 00:06:00.236 "bdev_nvme_stop_mdns_discovery", 00:06:00.236 "bdev_nvme_start_mdns_discovery", 00:06:00.236 "bdev_nvme_set_multipath_policy", 00:06:00.236 "bdev_nvme_set_preferred_path", 00:06:00.236 "bdev_nvme_get_io_paths", 00:06:00.236 "bdev_nvme_remove_error_injection", 00:06:00.236 "bdev_nvme_add_error_injection", 00:06:00.236 "bdev_nvme_get_discovery_info", 00:06:00.236 "bdev_nvme_stop_discovery", 00:06:00.236 "bdev_nvme_start_discovery", 00:06:00.236 "bdev_nvme_get_controller_health_info", 00:06:00.236 "bdev_nvme_disable_controller", 00:06:00.236 "bdev_nvme_enable_controller", 00:06:00.236 "bdev_nvme_reset_controller", 00:06:00.236 "bdev_nvme_get_transport_statistics", 00:06:00.236 "bdev_nvme_apply_firmware", 00:06:00.236 "bdev_nvme_detach_controller", 00:06:00.236 "bdev_nvme_get_controllers", 00:06:00.236 "bdev_nvme_attach_controller", 00:06:00.236 "bdev_nvme_set_hotplug", 00:06:00.236 "bdev_nvme_set_options", 00:06:00.236 "bdev_passthru_delete", 00:06:00.236 "bdev_passthru_create", 00:06:00.236 "bdev_lvol_set_parent_bdev", 00:06:00.236 "bdev_lvol_set_parent", 00:06:00.236 "bdev_lvol_check_shallow_copy", 00:06:00.236 "bdev_lvol_start_shallow_copy", 00:06:00.236 "bdev_lvol_grow_lvstore", 00:06:00.236 "bdev_lvol_get_lvols", 00:06:00.236 "bdev_lvol_get_lvstores", 00:06:00.236 "bdev_lvol_delete", 00:06:00.236 "bdev_lvol_set_read_only", 
00:06:00.236 "bdev_lvol_resize", 00:06:00.236 "bdev_lvol_decouple_parent", 00:06:00.236 "bdev_lvol_inflate", 00:06:00.236 "bdev_lvol_rename", 00:06:00.236 "bdev_lvol_clone_bdev", 00:06:00.236 "bdev_lvol_clone", 00:06:00.236 "bdev_lvol_snapshot", 00:06:00.236 "bdev_lvol_create", 00:06:00.236 "bdev_lvol_delete_lvstore", 00:06:00.236 "bdev_lvol_rename_lvstore", 00:06:00.236 "bdev_lvol_create_lvstore", 00:06:00.236 "bdev_raid_set_options", 00:06:00.236 "bdev_raid_remove_base_bdev", 00:06:00.236 "bdev_raid_add_base_bdev", 00:06:00.236 "bdev_raid_delete", 00:06:00.236 "bdev_raid_create", 00:06:00.236 "bdev_raid_get_bdevs", 00:06:00.236 "bdev_error_inject_error", 00:06:00.236 "bdev_error_delete", 00:06:00.236 "bdev_error_create", 00:06:00.236 "bdev_split_delete", 00:06:00.236 "bdev_split_create", 00:06:00.236 "bdev_delay_delete", 00:06:00.236 "bdev_delay_create", 00:06:00.236 "bdev_delay_update_latency", 00:06:00.236 "bdev_zone_block_delete", 00:06:00.236 "bdev_zone_block_create", 00:06:00.236 "blobfs_create", 00:06:00.236 "blobfs_detect", 00:06:00.236 "blobfs_set_cache_size", 00:06:00.236 "bdev_aio_delete", 00:06:00.236 "bdev_aio_rescan", 00:06:00.236 "bdev_aio_create", 00:06:00.236 "bdev_ftl_set_property", 00:06:00.236 "bdev_ftl_get_properties", 00:06:00.236 "bdev_ftl_get_stats", 00:06:00.236 "bdev_ftl_unmap", 00:06:00.236 "bdev_ftl_unload", 00:06:00.236 "bdev_ftl_delete", 00:06:00.236 "bdev_ftl_load", 00:06:00.236 "bdev_ftl_create", 00:06:00.236 "bdev_virtio_attach_controller", 00:06:00.236 "bdev_virtio_scsi_get_devices", 00:06:00.236 "bdev_virtio_detach_controller", 00:06:00.236 "bdev_virtio_blk_set_hotplug", 00:06:00.236 "bdev_iscsi_delete", 00:06:00.236 "bdev_iscsi_create", 00:06:00.236 "bdev_iscsi_set_options", 00:06:00.236 "accel_error_inject_error", 00:06:00.236 "ioat_scan_accel_module", 00:06:00.236 "dsa_scan_accel_module", 00:06:00.236 "iaa_scan_accel_module", 00:06:00.236 "keyring_file_remove_key", 00:06:00.236 "keyring_file_add_key", 00:06:00.236 
"keyring_linux_set_options", 00:06:00.236 "fsdev_aio_delete", 00:06:00.236 "fsdev_aio_create", 00:06:00.236 "iscsi_get_histogram", 00:06:00.236 "iscsi_enable_histogram", 00:06:00.236 "iscsi_set_options", 00:06:00.236 "iscsi_get_auth_groups", 00:06:00.236 "iscsi_auth_group_remove_secret", 00:06:00.236 "iscsi_auth_group_add_secret", 00:06:00.236 "iscsi_delete_auth_group", 00:06:00.236 "iscsi_create_auth_group", 00:06:00.236 "iscsi_set_discovery_auth", 00:06:00.236 "iscsi_get_options", 00:06:00.236 "iscsi_target_node_request_logout", 00:06:00.236 "iscsi_target_node_set_redirect", 00:06:00.236 "iscsi_target_node_set_auth", 00:06:00.236 "iscsi_target_node_add_lun", 00:06:00.236 "iscsi_get_stats", 00:06:00.236 "iscsi_get_connections", 00:06:00.236 "iscsi_portal_group_set_auth", 00:06:00.236 "iscsi_start_portal_group", 00:06:00.236 "iscsi_delete_portal_group", 00:06:00.236 "iscsi_create_portal_group", 00:06:00.236 "iscsi_get_portal_groups", 00:06:00.236 "iscsi_delete_target_node", 00:06:00.236 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.236 "iscsi_target_node_add_pg_ig_maps", 00:06:00.236 "iscsi_create_target_node", 00:06:00.236 "iscsi_get_target_nodes", 00:06:00.236 "iscsi_delete_initiator_group", 00:06:00.236 "iscsi_initiator_group_remove_initiators", 00:06:00.236 "iscsi_initiator_group_add_initiators", 00:06:00.236 "iscsi_create_initiator_group", 00:06:00.236 "iscsi_get_initiator_groups", 00:06:00.236 "nvmf_set_crdt", 00:06:00.236 "nvmf_set_config", 00:06:00.236 "nvmf_set_max_subsystems", 00:06:00.236 "nvmf_stop_mdns_prr", 00:06:00.237 "nvmf_publish_mdns_prr", 00:06:00.237 "nvmf_subsystem_get_listeners", 00:06:00.237 "nvmf_subsystem_get_qpairs", 00:06:00.237 "nvmf_subsystem_get_controllers", 00:06:00.237 "nvmf_get_stats", 00:06:00.237 "nvmf_get_transports", 00:06:00.237 "nvmf_create_transport", 00:06:00.237 "nvmf_get_targets", 00:06:00.237 "nvmf_delete_target", 00:06:00.237 "nvmf_create_target", 00:06:00.237 "nvmf_subsystem_allow_any_host", 00:06:00.237 
"nvmf_subsystem_set_keys", 00:06:00.237 "nvmf_subsystem_remove_host", 00:06:00.237 "nvmf_subsystem_add_host", 00:06:00.237 "nvmf_ns_remove_host", 00:06:00.237 "nvmf_ns_add_host", 00:06:00.237 "nvmf_subsystem_remove_ns", 00:06:00.237 "nvmf_subsystem_set_ns_ana_group", 00:06:00.237 "nvmf_subsystem_add_ns", 00:06:00.237 "nvmf_subsystem_listener_set_ana_state", 00:06:00.237 "nvmf_discovery_get_referrals", 00:06:00.237 "nvmf_discovery_remove_referral", 00:06:00.237 "nvmf_discovery_add_referral", 00:06:00.237 "nvmf_subsystem_remove_listener", 00:06:00.237 "nvmf_subsystem_add_listener", 00:06:00.237 "nvmf_delete_subsystem", 00:06:00.237 "nvmf_create_subsystem", 00:06:00.237 "nvmf_get_subsystems", 00:06:00.237 "env_dpdk_get_mem_stats", 00:06:00.237 "nbd_get_disks", 00:06:00.237 "nbd_stop_disk", 00:06:00.237 "nbd_start_disk", 00:06:00.237 "ublk_recover_disk", 00:06:00.237 "ublk_get_disks", 00:06:00.237 "ublk_stop_disk", 00:06:00.237 "ublk_start_disk", 00:06:00.237 "ublk_destroy_target", 00:06:00.237 "ublk_create_target", 00:06:00.237 "virtio_blk_create_transport", 00:06:00.237 "virtio_blk_get_transports", 00:06:00.237 "vhost_controller_set_coalescing", 00:06:00.237 "vhost_get_controllers", 00:06:00.237 "vhost_delete_controller", 00:06:00.237 "vhost_create_blk_controller", 00:06:00.237 "vhost_scsi_controller_remove_target", 00:06:00.237 "vhost_scsi_controller_add_target", 00:06:00.237 "vhost_start_scsi_controller", 00:06:00.237 "vhost_create_scsi_controller", 00:06:00.237 "thread_set_cpumask", 00:06:00.237 "scheduler_set_options", 00:06:00.237 "framework_get_governor", 00:06:00.237 "framework_get_scheduler", 00:06:00.237 "framework_set_scheduler", 00:06:00.237 "framework_get_reactors", 00:06:00.237 "thread_get_io_channels", 00:06:00.237 "thread_get_pollers", 00:06:00.237 "thread_get_stats", 00:06:00.237 "framework_monitor_context_switch", 00:06:00.237 "spdk_kill_instance", 00:06:00.237 "log_enable_timestamps", 00:06:00.237 "log_get_flags", 00:06:00.237 "log_clear_flag", 
00:06:00.237 "log_set_flag", 00:06:00.237 "log_get_level", 00:06:00.237 "log_set_level", 00:06:00.237 "log_get_print_level", 00:06:00.237 "log_set_print_level", 00:06:00.237 "framework_enable_cpumask_locks", 00:06:00.237 "framework_disable_cpumask_locks", 00:06:00.237 "framework_wait_init", 00:06:00.237 "framework_start_init", 00:06:00.237 "scsi_get_devices", 00:06:00.237 "bdev_get_histogram", 00:06:00.237 "bdev_enable_histogram", 00:06:00.237 "bdev_set_qos_limit", 00:06:00.237 "bdev_set_qd_sampling_period", 00:06:00.237 "bdev_get_bdevs", 00:06:00.237 "bdev_reset_iostat", 00:06:00.237 "bdev_get_iostat", 00:06:00.237 "bdev_examine", 00:06:00.237 "bdev_wait_for_examine", 00:06:00.237 "bdev_set_options", 00:06:00.237 "accel_get_stats", 00:06:00.237 "accel_set_options", 00:06:00.237 "accel_set_driver", 00:06:00.237 "accel_crypto_key_destroy", 00:06:00.237 "accel_crypto_keys_get", 00:06:00.237 "accel_crypto_key_create", 00:06:00.237 "accel_assign_opc", 00:06:00.237 "accel_get_module_info", 00:06:00.237 "accel_get_opc_assignments", 00:06:00.237 "vmd_rescan", 00:06:00.237 "vmd_remove_device", 00:06:00.237 "vmd_enable", 00:06:00.237 "sock_get_default_impl", 00:06:00.237 "sock_set_default_impl", 00:06:00.237 "sock_impl_set_options", 00:06:00.237 "sock_impl_get_options", 00:06:00.237 "iobuf_get_stats", 00:06:00.237 "iobuf_set_options", 00:06:00.237 "keyring_get_keys", 00:06:00.237 "framework_get_pci_devices", 00:06:00.237 "framework_get_config", 00:06:00.237 "framework_get_subsystems", 00:06:00.237 "fsdev_set_opts", 00:06:00.237 "fsdev_get_opts", 00:06:00.237 "trace_get_info", 00:06:00.237 "trace_get_tpoint_group_mask", 00:06:00.237 "trace_disable_tpoint_group", 00:06:00.237 "trace_enable_tpoint_group", 00:06:00.237 "trace_clear_tpoint_mask", 00:06:00.237 "trace_set_tpoint_mask", 00:06:00.237 "notify_get_notifications", 00:06:00.237 "notify_get_types", 00:06:00.237 "spdk_get_version", 00:06:00.237 "rpc_get_methods" 00:06:00.237 ] 00:06:00.237 23:01:19 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.237 23:01:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.237 23:01:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69887 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69887 ']' 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69887 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.237 23:01:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69887 00:06:00.498 killing process with pid 69887 00:06:00.498 23:01:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.498 23:01:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.498 23:01:19 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69887' 00:06:00.498 23:01:19 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69887 00:06:00.498 23:01:19 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69887 00:06:00.757 00:06:00.757 real 0m1.817s 00:06:00.757 user 0m2.988s 00:06:00.757 sys 0m0.566s 00:06:00.757 23:01:20 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.757 23:01:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.757 ************************************ 00:06:00.757 END TEST spdkcli_tcp 00:06:00.757 ************************************ 00:06:00.757 23:01:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.757 23:01:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.757 23:01:20 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.757 23:01:20 -- common/autotest_common.sh@10 -- # set +x 00:06:00.757 ************************************ 00:06:00.757 START TEST dpdk_mem_utility 00:06:00.757 ************************************ 00:06:00.757 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.017 * Looking for test storage... 00:06:01.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:01.017 
23:01:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.017 23:01:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.017 --rc genhtml_branch_coverage=1 00:06:01.017 --rc genhtml_function_coverage=1 00:06:01.017 --rc genhtml_legend=1 00:06:01.017 --rc geninfo_all_blocks=1 00:06:01.017 --rc geninfo_unexecuted_blocks=1 00:06:01.017 00:06:01.017 ' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.017 --rc 
genhtml_branch_coverage=1 00:06:01.017 --rc genhtml_function_coverage=1 00:06:01.017 --rc genhtml_legend=1 00:06:01.017 --rc geninfo_all_blocks=1 00:06:01.017 --rc geninfo_unexecuted_blocks=1 00:06:01.017 00:06:01.017 ' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.017 --rc genhtml_branch_coverage=1 00:06:01.017 --rc genhtml_function_coverage=1 00:06:01.017 --rc genhtml_legend=1 00:06:01.017 --rc geninfo_all_blocks=1 00:06:01.017 --rc geninfo_unexecuted_blocks=1 00:06:01.017 00:06:01.017 ' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.017 --rc genhtml_branch_coverage=1 00:06:01.017 --rc genhtml_function_coverage=1 00:06:01.017 --rc genhtml_legend=1 00:06:01.017 --rc geninfo_all_blocks=1 00:06:01.017 --rc geninfo_unexecuted_blocks=1 00:06:01.017 00:06:01.017 ' 00:06:01.017 23:01:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.017 23:01:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69983 00:06:01.017 23:01:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.017 23:01:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69983 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69983 ']' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.017 23:01:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.278 [2024-11-18 23:01:20.403389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:01.278 [2024-11-18 23:01:20.403518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69983 ] 00:06:01.278 [2024-11-18 23:01:20.560458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.278 [2024-11-18 23:01:20.606245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.850 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.850 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:01.850 23:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.850 23:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.850 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.850 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.850 { 00:06:01.850 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.850 } 00:06:01.850 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.850 23:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.178 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:02.178 1 heaps 
totaling size 860.000000 MiB 00:06:02.178 size: 860.000000 MiB heap id: 0 00:06:02.178 end heaps---------- 00:06:02.178 9 mempools totaling size 642.649841 MiB 00:06:02.178 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.178 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.178 size: 92.545471 MiB name: bdev_io_69983 00:06:02.178 size: 51.011292 MiB name: evtpool_69983 00:06:02.178 size: 50.003479 MiB name: msgpool_69983 00:06:02.178 size: 36.509338 MiB name: fsdev_io_69983 00:06:02.178 size: 21.763794 MiB name: PDU_Pool 00:06:02.178 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.178 size: 0.026123 MiB name: Session_Pool 00:06:02.178 end mempools------- 00:06:02.178 6 memzones totaling size 4.142822 MiB 00:06:02.178 size: 1.000366 MiB name: RG_ring_0_69983 00:06:02.178 size: 1.000366 MiB name: RG_ring_1_69983 00:06:02.178 size: 1.000366 MiB name: RG_ring_4_69983 00:06:02.178 size: 1.000366 MiB name: RG_ring_5_69983 00:06:02.178 size: 0.125366 MiB name: RG_ring_2_69983 00:06:02.178 size: 0.015991 MiB name: RG_ring_3_69983 00:06:02.178 end memzones------- 00:06:02.178 23:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.178 heap id: 0 total size: 860.000000 MiB number of busy elements: 302 number of free elements: 16 00:06:02.178 list of free elements. 
size: 13.937439 MiB 00:06:02.178 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.178 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:02.178 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:02.178 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:02.178 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:02.178 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:02.178 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:02.178 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:02.178 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:02.178 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:02.178 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:02.178 element at address: 0x200003e00000 with size: 0.488831 MiB 00:06:02.178 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:02.178 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:02.178 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:02.178 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:02.178 list of standard malloc elements. 
size: 199.265869 MiB 00:06:02.178 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:02.178 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:02.178 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:02.178 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:02.178 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:02.178 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.178 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:02.178 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.178 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:02.178 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:02.178 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:02.178 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:06:02.179 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7da80 with 
size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.179 element at address: 
0x20000707b000 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:02.179 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:02.179 
element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892c80 with size: 0.000183 
MiB 00:06:02.179 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:02.179 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894180 
with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:02.180 element at 
address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d740 with size: 0.000183 MiB 
00:06:02.180 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ec40 with 
size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:02.180 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:02.180 list of memzone associated elements. 
size: 646.796692 MiB 00:06:02.180 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:02.180 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.180 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:02.180 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.180 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:02.180 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69983_0 00:06:02.180 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.180 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69983_0 00:06:02.180 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.180 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69983_0 00:06:02.180 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:02.180 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69983_0 00:06:02.180 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:02.180 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.180 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:02.180 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.180 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.180 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69983 00:06:02.180 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.180 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69983 00:06:02.180 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.180 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69983 00:06:02.180 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:02.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.180 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:02.180 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.180 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:02.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.180 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:02.180 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.181 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.181 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69983 00:06:02.181 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.181 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69983 00:06:02.181 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:02.181 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69983 00:06:02.181 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:02.181 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69983 00:06:02.181 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:02.181 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69983 00:06:02.181 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:02.181 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69983 00:06:02.181 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:02.181 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.181 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:02.181 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.181 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:02.181 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.181 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:02.181 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69983 00:06:02.181 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:02.181 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.181 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:02.181 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.181 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:02.181 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69983 00:06:02.181 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:02.181 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.181 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:02.181 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69983 00:06:02.181 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:02.181 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69983 00:06:02.181 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:02.181 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69983 00:06:02.181 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:02.181 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.181 23:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.181 23:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69983 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69983 ']' 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69983 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69983 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.181 23:01:21 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.181 killing process with pid 69983 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69983' 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69983 00:06:02.181 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69983 00:06:02.458 00:06:02.458 real 0m1.639s 00:06:02.458 user 0m1.557s 00:06:02.458 sys 0m0.505s 00:06:02.458 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.458 23:01:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.458 ************************************ 00:06:02.458 END TEST dpdk_mem_utility 00:06:02.458 ************************************ 00:06:02.458 23:01:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:02.458 23:01:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.458 23:01:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.458 23:01:21 -- common/autotest_common.sh@10 -- # set +x 00:06:02.458 ************************************ 00:06:02.458 START TEST event 00:06:02.458 ************************************ 00:06:02.458 23:01:21 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:02.723 * Looking for test storage... 
00:06:02.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.723 23:01:21 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.723 23:01:21 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.723 23:01:21 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.723 23:01:21 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.723 23:01:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.723 23:01:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.723 23:01:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.723 23:01:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.723 23:01:22 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.723 23:01:22 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.723 23:01:22 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.723 23:01:22 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.723 23:01:22 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.723 23:01:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.723 23:01:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.723 23:01:22 event -- scripts/common.sh@344 -- # case "$op" in 00:06:02.723 23:01:22 event -- scripts/common.sh@345 -- # : 1 00:06:02.723 23:01:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.723 23:01:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.723 23:01:22 event -- scripts/common.sh@365 -- # decimal 1 00:06:02.723 23:01:22 event -- scripts/common.sh@353 -- # local d=1 00:06:02.723 23:01:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.723 23:01:22 event -- scripts/common.sh@355 -- # echo 1 00:06:02.723 23:01:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.723 23:01:22 event -- scripts/common.sh@366 -- # decimal 2 00:06:02.723 23:01:22 event -- scripts/common.sh@353 -- # local d=2 00:06:02.723 23:01:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.723 23:01:22 event -- scripts/common.sh@355 -- # echo 2 00:06:02.723 23:01:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.723 23:01:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.723 23:01:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.723 23:01:22 event -- scripts/common.sh@368 -- # return 0 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.723 --rc genhtml_branch_coverage=1 00:06:02.723 --rc genhtml_function_coverage=1 00:06:02.723 --rc genhtml_legend=1 00:06:02.723 --rc geninfo_all_blocks=1 00:06:02.723 --rc geninfo_unexecuted_blocks=1 00:06:02.723 00:06:02.723 ' 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.723 --rc genhtml_branch_coverage=1 00:06:02.723 --rc genhtml_function_coverage=1 00:06:02.723 --rc genhtml_legend=1 00:06:02.723 --rc geninfo_all_blocks=1 00:06:02.723 --rc geninfo_unexecuted_blocks=1 00:06:02.723 00:06:02.723 ' 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.723 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:02.723 --rc genhtml_branch_coverage=1 00:06:02.723 --rc genhtml_function_coverage=1 00:06:02.723 --rc genhtml_legend=1 00:06:02.723 --rc geninfo_all_blocks=1 00:06:02.723 --rc geninfo_unexecuted_blocks=1 00:06:02.723 00:06:02.723 ' 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.723 --rc genhtml_branch_coverage=1 00:06:02.723 --rc genhtml_function_coverage=1 00:06:02.723 --rc genhtml_legend=1 00:06:02.723 --rc geninfo_all_blocks=1 00:06:02.723 --rc geninfo_unexecuted_blocks=1 00:06:02.723 00:06:02.723 ' 00:06:02.723 23:01:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:02.723 23:01:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.723 23:01:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:02.723 23:01:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.723 23:01:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.723 ************************************ 00:06:02.723 START TEST event_perf 00:06:02.723 ************************************ 00:06:02.723 23:01:22 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.723 Running I/O for 1 seconds...[2024-11-18 23:01:22.080914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:02.723 [2024-11-18 23:01:22.081036] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70069 ] 00:06:02.983 [2024-11-18 23:01:22.241819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.983 Running I/O for 1 seconds...[2024-11-18 23:01:22.288064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.983 [2024-11-18 23:01:22.288351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.983 [2024-11-18 23:01:22.288330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.983 [2024-11-18 23:01:22.288481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.368 00:06:04.368 lcore 0: 200472 00:06:04.368 lcore 1: 200471 00:06:04.368 lcore 2: 200472 00:06:04.368 lcore 3: 200471 00:06:04.368 done. 
00:06:04.368 00:06:04.368 real 0m1.346s 00:06:04.368 user 0m4.124s 00:06:04.368 sys 0m0.103s 00:06:04.368 23:01:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.368 23:01:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.368 ************************************ 00:06:04.368 END TEST event_perf 00:06:04.368 ************************************ 00:06:04.368 23:01:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.368 23:01:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:04.368 23:01:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.368 23:01:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.368 ************************************ 00:06:04.368 START TEST event_reactor 00:06:04.368 ************************************ 00:06:04.368 23:01:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.368 [2024-11-18 23:01:23.496254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:04.368 [2024-11-18 23:01:23.496377] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70103 ] 00:06:04.368 [2024-11-18 23:01:23.654906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.368 [2024-11-18 23:01:23.699726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.750 test_start 00:06:05.750 oneshot 00:06:05.750 tick 100 00:06:05.750 tick 100 00:06:05.750 tick 250 00:06:05.750 tick 100 00:06:05.750 tick 100 00:06:05.750 tick 100 00:06:05.750 tick 250 00:06:05.750 tick 500 00:06:05.750 tick 100 00:06:05.750 tick 100 00:06:05.750 tick 250 00:06:05.750 tick 100 00:06:05.750 tick 100 00:06:05.750 test_end 00:06:05.750 00:06:05.750 real 0m1.339s 00:06:05.750 user 0m1.132s 00:06:05.750 sys 0m0.101s 00:06:05.750 23:01:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.750 23:01:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.750 ************************************ 00:06:05.750 END TEST event_reactor 00:06:05.750 ************************************ 00:06:05.750 23:01:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.750 23:01:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:05.750 23:01:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.750 23:01:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.750 ************************************ 00:06:05.750 START TEST event_reactor_perf 00:06:05.750 ************************************ 00:06:05.750 23:01:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.750 [2024-11-18 
23:01:24.899852] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:05.750 [2024-11-18 23:01:24.899980] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:06:05.750 [2024-11-18 23:01:25.058571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.750 [2024-11-18 23:01:25.107690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.131 test_start 00:06:07.131 test_end 00:06:07.131 Performance: 414379 events per second 00:06:07.131 00:06:07.131 real 0m1.342s 00:06:07.131 user 0m1.136s 00:06:07.132 sys 0m0.098s 00:06:07.132 23:01:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.132 23:01:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.132 ************************************ 00:06:07.132 END TEST event_reactor_perf 00:06:07.132 ************************************ 00:06:07.132 23:01:26 event -- event/event.sh@49 -- # uname -s 00:06:07.132 23:01:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.132 23:01:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.132 23:01:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.132 23:01:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.132 23:01:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.132 ************************************ 00:06:07.132 START TEST event_scheduler 00:06:07.132 ************************************ 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.132 * Looking for test storage... 
00:06:07.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.132 23:01:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.132 --rc genhtml_branch_coverage=1 00:06:07.132 --rc genhtml_function_coverage=1 00:06:07.132 --rc genhtml_legend=1 00:06:07.132 --rc geninfo_all_blocks=1 00:06:07.132 --rc geninfo_unexecuted_blocks=1 00:06:07.132 00:06:07.132 ' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.132 --rc genhtml_branch_coverage=1 00:06:07.132 --rc genhtml_function_coverage=1 00:06:07.132 --rc 
genhtml_legend=1 00:06:07.132 --rc geninfo_all_blocks=1 00:06:07.132 --rc geninfo_unexecuted_blocks=1 00:06:07.132 00:06:07.132 ' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.132 --rc genhtml_branch_coverage=1 00:06:07.132 --rc genhtml_function_coverage=1 00:06:07.132 --rc genhtml_legend=1 00:06:07.132 --rc geninfo_all_blocks=1 00:06:07.132 --rc geninfo_unexecuted_blocks=1 00:06:07.132 00:06:07.132 ' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.132 --rc genhtml_branch_coverage=1 00:06:07.132 --rc genhtml_function_coverage=1 00:06:07.132 --rc genhtml_legend=1 00:06:07.132 --rc geninfo_all_blocks=1 00:06:07.132 --rc geninfo_unexecuted_blocks=1 00:06:07.132 00:06:07.132 ' 00:06:07.132 23:01:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.132 23:01:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70210 00:06:07.132 23:01:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.132 23:01:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.132 23:01:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70210 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70210 ']' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.132 23:01:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.392 [2024-11-18 23:01:26.574758] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:07.392 [2024-11-18 23:01:26.574901] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70210 ] 00:06:07.392 [2024-11-18 23:01:26.728565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.652 [2024-11-18 23:01:26.775460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.652 [2024-11-18 23:01:26.775692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.652 [2024-11-18 23:01:26.775732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.652 [2024-11-18 23:01:26.775858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:08.223 23:01:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.223 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.223 
POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.223 POWER: Cannot set governor of lcore 0 to performance 00:06:08.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.223 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.223 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.223 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:08.223 POWER: Unable to set Power Management Environment for lcore 0 00:06:08.223 [2024-11-18 23:01:27.396185] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:08.223 [2024-11-18 23:01:27.396209] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:08.223 [2024-11-18 23:01:27.396224] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.223 [2024-11-18 23:01:27.396270] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.223 [2024-11-18 23:01:27.396305] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.223 [2024-11-18 23:01:27.396327] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 [2024-11-18 23:01:27.467781] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
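The POWER errors above come from the dynamic scheduler's dpdk governor failing to open each core's `scaling_governor` file in sysfs, after which it falls back and the test continues. As a hedged sketch (the helper name and the optional root argument are assumptions for illustration, not SPDK code), the per-core files it was trying to reach can be inspected like this:

```shell
# Hypothetical helper, not an SPDK script: print each CPU's cpufreq
# scaling governor -- the same per-core scaling_governor files the
# POWER errors above could not open. An optional root argument makes
# the sysfs location overridable.
show_governors() {
    local root=${1:-/sys/devices/system/cpu}
    local f
    for f in "$root"/cpu[0-9]*/cpufreq/scaling_governor; do
        # unmatched glob stays literal: no cpufreq support exposed
        [ -e "$f" ] || { echo "no cpufreq governors exposed under $root"; return; }
        echo "$f: $(cat "$f")"
    done
}
```

On hosts (or VMs, as in this run) where cpufreq is not exposed, the loop reports that instead of failing, which mirrors why the test proceeds after the governor errors.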
00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 ************************************ 00:06:08.223 START TEST scheduler_create_thread 00:06:08.223 ************************************ 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 2 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 3 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 4 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 5 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 6 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.223 7 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 8 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.223 9 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.223 23:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.602 10 00:06:09.602 23:01:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.602 23:01:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:09.602 23:01:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.602 23:01:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.983 23:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.983 23:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.983 23:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.983 23:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.983 23:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.551 23:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.551 23:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.551 23:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.551 23:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.489 23:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.489 23:01:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:12.489 23:01:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:12.489 23:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.489 23:01:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.059 23:01:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.059 00:06:13.059 real 0m4.779s 00:06:13.059 user 0m0.028s 00:06:13.059 sys 0m0.008s 00:06:13.059 23:01:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.059 ************************************ 00:06:13.059 END TEST scheduler_create_thread 00:06:13.059 ************************************ 00:06:13.059 23:01:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.059 23:01:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:13.059 23:01:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70210 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70210 ']' 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70210 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70210 00:06:13.059 killing process with pid 70210 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70210' 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70210 00:06:13.059 23:01:32 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70210 00:06:13.319 [2024-11-18 23:01:32.537132] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:13.579 00:06:13.579 real 0m6.549s 00:06:13.579 user 0m14.159s 00:06:13.579 sys 0m0.489s 00:06:13.579 23:01:32 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.579 23:01:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.580 ************************************ 00:06:13.580 END TEST event_scheduler 00:06:13.580 ************************************ 00:06:13.580 23:01:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:13.580 23:01:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:13.580 23:01:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.580 23:01:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.580 23:01:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.580 ************************************ 00:06:13.580 START TEST app_repeat 00:06:13.580 ************************************ 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70327 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:13.580 
23:01:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.580 Process app_repeat pid: 70327 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70327' 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.580 spdk_app_start Round 0 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:13.580 23:01:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70327 /var/tmp/spdk-nbd.sock 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70327 ']' 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.580 23:01:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.580 [2024-11-18 23:01:32.953457] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:13.580 [2024-11-18 23:01:32.953567] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70327 ] 00:06:13.840 [2024-11-18 23:01:33.112584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.840 [2024-11-18 23:01:33.157937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.840 [2024-11-18 23:01:33.158034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.778 23:01:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.778 23:01:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:14.778 23:01:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.778 Malloc0 00:06:14.778 23:01:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.038 Malloc1 00:06:15.038 23:01:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.038 23:01:34 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.038 23:01:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.038 /dev/nbd0 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.298 1+0 records in 00:06:15.298 1+0 
records out 00:06:15.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223626 s, 18.3 MB/s 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.298 /dev/nbd1 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.298 23:01:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.298 23:01:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.299 23:01:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.558 1+0 records in 00:06:15.558 1+0 records out 00:06:15.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340452 s, 12.0 MB/s 00:06:15.559 23:01:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.559 23:01:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.559 23:01:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.559 23:01:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.559 23:01:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.559 { 00:06:15.559 "nbd_device": "/dev/nbd0", 00:06:15.559 "bdev_name": "Malloc0" 00:06:15.559 }, 00:06:15.559 { 00:06:15.559 "nbd_device": "/dev/nbd1", 00:06:15.559 "bdev_name": "Malloc1" 00:06:15.559 } 00:06:15.559 ]' 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.559 { 00:06:15.559 "nbd_device": "/dev/nbd0", 00:06:15.559 "bdev_name": "Malloc0" 00:06:15.559 }, 00:06:15.559 { 00:06:15.559 "nbd_device": "/dev/nbd1", 00:06:15.559 "bdev_name": "Malloc1" 00:06:15.559 } 00:06:15.559 ]' 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.559 /dev/nbd1' 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.559 /dev/nbd1' 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.559 23:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.819 23:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.819 23:01:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.819 23:01:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.819 23:01:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.819 23:01:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.820 256+0 records in 00:06:15.820 256+0 records out 00:06:15.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128276 s, 81.7 MB/s 00:06:15.820 23:01:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.820 23:01:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.820 256+0 records in 00:06:15.820 256+0 records out 00:06:15.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215948 s, 48.6 MB/s 00:06:15.820 23:01:34 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.820 23:01:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.820 256+0 records in 00:06:15.820 256+0 records out 00:06:15.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225601 s, 46.5 MB/s 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.820 23:01:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.079 23:01:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.080 23:01:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.340 23:01:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.340 23:01:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.600 23:01:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.858 [2024-11-18 23:01:36.069470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.858 [2024-11-18 23:01:36.112929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.858 [2024-11-18 23:01:36.112933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.858 
[2024-11-18 23:01:36.155152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.858 [2024-11-18 23:01:36.155254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.156 23:01:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.156 spdk_app_start Round 1 00:06:20.156 23:01:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.156 23:01:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70327 /var/tmp/spdk-nbd.sock 00:06:20.156 23:01:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70327 ']' 00:06:20.156 23:01:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.156 23:01:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.156 23:01:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:20.156 23:01:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.156 23:01:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.156 23:01:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.156 23:01:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.156 23:01:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.156 Malloc0 00:06:20.156 23:01:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.156 Malloc1 00:06:20.416 23:01:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.416 23:01:39 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.416 /dev/nbd0 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.416 23:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.416 23:01:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:20.416 23:01:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.416 23:01:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.416 23:01:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.417 1+0 records in 00:06:20.417 1+0 records out 00:06:20.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347806 s, 11.8 MB/s 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.417 
23:01:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.417 23:01:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.417 23:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.417 23:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.417 23:01:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.675 /dev/nbd1 00:06:20.675 23:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.675 23:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.675 23:01:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:20.675 23:01:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.676 23:01:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.676 1+0 records in 00:06:20.676 1+0 records out 00:06:20.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197903 s, 20.7 MB/s 00:06:20.676 23:01:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.676 23:01:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.676 23:01:40 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.676 23:01:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.676 23:01:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.676 23:01:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.676 23:01:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.676 23:01:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.676 23:01:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.676 23:01:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.935 { 00:06:20.935 "nbd_device": "/dev/nbd0", 00:06:20.935 "bdev_name": "Malloc0" 00:06:20.935 }, 00:06:20.935 { 00:06:20.935 "nbd_device": "/dev/nbd1", 00:06:20.935 "bdev_name": "Malloc1" 00:06:20.935 } 00:06:20.935 ]' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.935 { 00:06:20.935 "nbd_device": "/dev/nbd0", 00:06:20.935 "bdev_name": "Malloc0" 00:06:20.935 }, 00:06:20.935 { 00:06:20.935 "nbd_device": "/dev/nbd1", 00:06:20.935 "bdev_name": "Malloc1" 00:06:20.935 } 00:06:20.935 ]' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.935 /dev/nbd1' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.935 /dev/nbd1' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.935 
23:01:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.935 256+0 records in 00:06:20.935 256+0 records out 00:06:20.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140176 s, 74.8 MB/s 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.935 23:01:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.195 256+0 records in 00:06:21.195 256+0 records out 00:06:21.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239843 s, 43.7 MB/s 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.195 256+0 records in 00:06:21.195 256+0 records out 00:06:21.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233376 s, 44.9 MB/s 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.195 23:01:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.461 23:01:40 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.461 23:01:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.461 23:01:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.461 23:01:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.461 23:01:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.461 23:01:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.461 23:01:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.462 23:01:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.722 23:01:40 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.722 23:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.722 23:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.722 23:01:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.722 23:01:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.983 23:01:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.243 [2024-11-18 23:01:41.420910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.243 [2024-11-18 23:01:41.462003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.243 [2024-11-18 23:01:41.462030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.243 [2024-11-18 23:01:41.504252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.243 [2024-11-18 23:01:41.504313] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.536 spdk_app_start Round 2 00:06:25.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:25.536 23:01:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.536 23:01:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:25.536 23:01:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70327 /var/tmp/spdk-nbd.sock 00:06:25.536 23:01:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70327 ']' 00:06:25.536 23:01:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.536 23:01:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.537 23:01:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.537 23:01:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.537 23:01:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.537 23:01:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.537 23:01:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.537 23:01:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.537 Malloc0 00:06:25.537 23:01:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.537 Malloc1 00:06:25.537 23:01:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.537 23:01:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.798 /dev/nbd0 00:06:25.798 23:01:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.798 23:01:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.798 1+0 records in 00:06:25.798 1+0 records out 00:06:25.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029896 s, 13.7 MB/s 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.798 23:01:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.798 23:01:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.798 23:01:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.798 23:01:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.058 /dev/nbd1 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:26.058 23:01:45 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.058 1+0 records in 00:06:26.058 1+0 records out 00:06:26.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341638 s, 12.0 MB/s 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.058 23:01:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.058 23:01:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.319 { 00:06:26.319 "nbd_device": "/dev/nbd0", 00:06:26.319 "bdev_name": "Malloc0" 00:06:26.319 }, 00:06:26.319 { 00:06:26.319 "nbd_device": "/dev/nbd1", 00:06:26.319 "bdev_name": "Malloc1" 00:06:26.319 } 00:06:26.319 ]' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.319 { 
00:06:26.319 "nbd_device": "/dev/nbd0", 00:06:26.319 "bdev_name": "Malloc0" 00:06:26.319 }, 00:06:26.319 { 00:06:26.319 "nbd_device": "/dev/nbd1", 00:06:26.319 "bdev_name": "Malloc1" 00:06:26.319 } 00:06:26.319 ]' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.319 /dev/nbd1' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.319 /dev/nbd1' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.319 256+0 records in 00:06:26.319 256+0 records out 00:06:26.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134213 s, 78.1 MB/s 00:06:26.319 23:01:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.319 23:01:45 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.579 256+0 records in 00:06:26.579 256+0 records out 00:06:26.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201906 s, 51.9 MB/s 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.579 256+0 records in 00:06:26.579 256+0 records out 00:06:26.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020242 s, 51.8 MB/s 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.579 23:01:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.839 23:01:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.839 23:01:46 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.839 23:01:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.099 23:01:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.099 23:01:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.359 23:01:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.618 
[2024-11-18 23:01:46.804760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.618 [2024-11-18 23:01:46.846163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.618 [2024-11-18 23:01:46.846170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.619 [2024-11-18 23:01:46.888703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.619 [2024-11-18 23:01:46.888750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.915 23:01:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70327 /var/tmp/spdk-nbd.sock 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70327 ']' 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:30.915 23:01:49 event.app_repeat -- event/event.sh@39 -- # killprocess 70327 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70327 ']' 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70327 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70327 00:06:30.915 killing process with pid 70327 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70327' 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70327 00:06:30.915 23:01:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70327 00:06:30.915 spdk_app_start is called in Round 0. 00:06:30.915 Shutdown signal received, stop current app iteration 00:06:30.915 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:30.915 spdk_app_start is called in Round 1. 00:06:30.915 Shutdown signal received, stop current app iteration 00:06:30.915 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:30.915 spdk_app_start is called in Round 2. 
00:06:30.915 Shutdown signal received, stop current app iteration 00:06:30.915 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:30.915 spdk_app_start is called in Round 3. 00:06:30.915 Shutdown signal received, stop current app iteration 00:06:30.915 23:01:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:30.915 23:01:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:30.915 00:06:30.915 real 0m17.187s 00:06:30.915 user 0m37.714s 00:06:30.915 sys 0m2.632s 00:06:30.915 23:01:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.915 23:01:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.915 ************************************ 00:06:30.915 END TEST app_repeat 00:06:30.915 ************************************ 00:06:30.915 23:01:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:30.915 23:01:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.915 23:01:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.915 23:01:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.915 23:01:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.915 ************************************ 00:06:30.915 START TEST cpu_locks 00:06:30.915 ************************************ 00:06:30.915 23:01:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.915 * Looking for test storage... 
00:06:30.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:30.915 23:01:50 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.915 23:01:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.915 23:01:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.176 23:01:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.176 --rc genhtml_branch_coverage=1 00:06:31.176 --rc genhtml_function_coverage=1 00:06:31.176 --rc genhtml_legend=1 00:06:31.176 --rc geninfo_all_blocks=1 00:06:31.176 --rc geninfo_unexecuted_blocks=1 00:06:31.176 00:06:31.176 ' 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.176 --rc genhtml_branch_coverage=1 00:06:31.176 --rc genhtml_function_coverage=1 00:06:31.176 --rc genhtml_legend=1 00:06:31.176 --rc geninfo_all_blocks=1 00:06:31.176 --rc geninfo_unexecuted_blocks=1 
00:06:31.176 00:06:31.176 ' 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.176 --rc genhtml_branch_coverage=1 00:06:31.176 --rc genhtml_function_coverage=1 00:06:31.176 --rc genhtml_legend=1 00:06:31.176 --rc geninfo_all_blocks=1 00:06:31.176 --rc geninfo_unexecuted_blocks=1 00:06:31.176 00:06:31.176 ' 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.176 --rc genhtml_branch_coverage=1 00:06:31.176 --rc genhtml_function_coverage=1 00:06:31.176 --rc genhtml_legend=1 00:06:31.176 --rc geninfo_all_blocks=1 00:06:31.176 --rc geninfo_unexecuted_blocks=1 00:06:31.176 00:06:31.176 ' 00:06:31.176 23:01:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:31.176 23:01:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:31.176 23:01:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:31.176 23:01:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.176 23:01:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.176 ************************************ 00:06:31.176 START TEST default_locks 00:06:31.176 ************************************ 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70752 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.176 
23:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70752 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70752 ']' 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.176 23:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.176 [2024-11-18 23:01:50.468115] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:31.176 [2024-11-18 23:01:50.468244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70752 ] 00:06:31.436 [2024-11-18 23:01:50.627905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.436 [2024-11-18 23:01:50.672369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.006 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.006 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:32.006 23:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70752 00:06:32.006 23:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70752 00:06:32.006 23:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70752 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70752 ']' 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70752 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70752 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.597 killing process with pid 70752 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70752' 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70752 00:06:32.597 23:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70752 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70752 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70752 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70752 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70752 ']' 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70752) - No such process 00:06:32.874 ERROR: process (pid: 70752) is no longer running 00:06:32.874 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.875 00:06:32.875 real 0m1.824s 00:06:32.875 user 0m1.758s 00:06:32.875 sys 0m0.658s 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.875 23:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.875 ************************************ 00:06:32.875 END TEST default_locks 00:06:32.875 ************************************ 00:06:33.135 23:01:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:33.135 23:01:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:33.135 23:01:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.135 23:01:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.135 ************************************ 00:06:33.135 START TEST default_locks_via_rpc 00:06:33.135 ************************************ 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70805 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70805 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70805 ']' 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.135 23:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.136 [2024-11-18 23:01:52.358193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:33.136 [2024-11-18 23:01:52.358335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70805 ] 00:06:33.396 [2024-11-18 23:01:52.519630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.396 [2024-11-18 23:01:52.565557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.972 23:01:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70805 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.972 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70805 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70805 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70805 ']' 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70805 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70805 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.542 killing process with pid 70805 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70805' 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70805 00:06:34.542 23:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70805 00:06:34.803 00:06:34.803 real 0m1.817s 00:06:34.803 user 0m1.805s 00:06:34.803 sys 0m0.618s 00:06:34.803 23:01:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.803 23:01:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.803 ************************************ 00:06:34.803 END TEST default_locks_via_rpc 00:06:34.803 ************************************ 00:06:34.803 23:01:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:34.803 23:01:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.803 23:01:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.803 23:01:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.803 ************************************ 00:06:34.803 START TEST non_locking_app_on_locked_coremask 00:06:34.803 ************************************ 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70852 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70852 /var/tmp/spdk.sock 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70852 ']' 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.803 23:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.063 [2024-11-18 23:01:54.241937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:35.063 [2024-11-18 23:01:54.242061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70852 ] 00:06:35.063 [2024-11-18 23:01:54.402167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.323 [2024-11-18 23:01:54.447136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70868 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70868 /var/tmp/spdk2.sock 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70868 ']' 00:06:35.892 23:01:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.892 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.892 [2024-11-18 23:01:55.137400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:35.892 [2024-11-18 23:01:55.137548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:06:36.152 [2024-11-18 23:01:55.284922] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.152 [2024-11-18 23:01:55.284972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.152 [2024-11-18 23:01:55.379649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.721 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.721 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:36.721 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70852 00:06:36.721 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70852 00:06:36.721 23:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70852 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70852 ']' 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70852 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70852 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.290 killing process with pid 70852 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70852' 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70852 00:06:37.290 23:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70852 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70868 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70868 ']' 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70868 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70868 00:06:38.240 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.241 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.241 killing process with pid 70868 00:06:38.241 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70868' 00:06:38.241 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70868 00:06:38.241 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70868 00:06:38.501 00:06:38.501 real 0m3.563s 00:06:38.501 user 0m3.736s 00:06:38.501 sys 0m1.077s 00:06:38.501 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:38.501 23:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.501 ************************************ 00:06:38.501 END TEST non_locking_app_on_locked_coremask 00:06:38.501 ************************************ 00:06:38.501 23:01:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:38.501 23:01:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.501 23:01:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.501 23:01:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.501 ************************************ 00:06:38.501 START TEST locking_app_on_unlocked_coremask 00:06:38.501 ************************************ 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70937 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70937 /var/tmp/spdk.sock 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70937 ']' 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.501 23:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.501 [2024-11-18 23:01:57.872384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:38.501 [2024-11-18 23:01:57.872517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:06:38.761 [2024-11-18 23:01:58.033847] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.761 [2024-11-18 23:01:58.034012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.761 [2024-11-18 23:01:58.078510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70947 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70947 /var/tmp/spdk2.sock 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70947 
']' 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.332 23:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.591 [2024-11-18 23:01:58.769871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.591 [2024-11-18 23:01:58.769985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70947 ] 00:06:39.591 [2024-11-18 23:01:58.921540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.851 [2024-11-18 23:01:59.008972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.420 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.420 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.420 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70947 00:06:40.420 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70947 00:06:40.420 23:01:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70937 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70937 ']' 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70937 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70937 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.680 killing process with pid 70937 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70937' 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70937 00:06:40.680 23:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70937 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70947 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70947 ']' 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70947 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70947 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.620 killing process with pid 70947 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70947' 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70947 00:06:41.620 23:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70947 00:06:41.907 00:06:41.907 real 0m3.378s 00:06:41.907 user 0m3.509s 00:06:41.907 sys 0m1.045s 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.907 ************************************ 00:06:41.907 END TEST locking_app_on_unlocked_coremask 00:06:41.907 ************************************ 00:06:41.907 23:02:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.907 23:02:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.907 23:02:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.907 23:02:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.907 ************************************ 00:06:41.907 START TEST 
locking_app_on_locked_coremask 00:06:41.907 ************************************ 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71011 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71011 /var/tmp/spdk.sock 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71011 ']' 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.907 23:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.168 [2024-11-18 23:02:01.314554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:42.168 [2024-11-18 23:02:01.314695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71011 ] 00:06:42.168 [2024-11-18 23:02:01.475911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.168 [2024-11-18 23:02:01.521002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.736 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.736 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71029 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71029 /var/tmp/spdk2.sock 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71029 /var/tmp/spdk2.sock 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71029 /var/tmp/spdk2.sock 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71029 ']' 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.996 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.996 [2024-11-18 23:02:02.205883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:42.996 [2024-11-18 23:02:02.206032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71029 ] 00:06:42.996 [2024-11-18 23:02:02.355685] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71011 has claimed it. 00:06:42.996 [2024-11-18 23:02:02.355752] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:43.565 ERROR: process (pid: 71029) is no longer running 00:06:43.565 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71029) - No such process 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71011 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71011 00:06:43.565 23:02:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71011 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71011 ']' 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71011 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71011 00:06:44.135 
killing process with pid 71011 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71011' 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71011 00:06:44.135 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71011 00:06:44.396 ************************************ 00:06:44.396 END TEST locking_app_on_locked_coremask 00:06:44.396 ************************************ 00:06:44.396 00:06:44.396 real 0m2.520s 00:06:44.396 user 0m2.699s 00:06:44.396 sys 0m0.771s 00:06:44.396 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.396 23:02:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.656 23:02:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.656 23:02:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.656 23:02:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.656 23:02:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.656 ************************************ 00:06:44.656 START TEST locking_overlapped_coremask 00:06:44.656 ************************************ 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71076 00:06:44.656 23:02:03 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71076 /var/tmp/spdk.sock 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71076 ']' 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.656 23:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.656 [2024-11-18 23:02:03.905259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:44.656 [2024-11-18 23:02:03.905856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:06:44.916 [2024-11-18 23:02:04.066358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.917 [2024-11-18 23:02:04.112712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.917 [2024-11-18 23:02:04.112804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.917 [2024-11-18 23:02:04.112913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71089 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71089 /var/tmp/spdk2.sock 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71089 /var/tmp/spdk2.sock 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71089 /var/tmp/spdk2.sock 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71089 ']' 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.485 23:02:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.485 [2024-11-18 23:02:04.803052] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.485 [2024-11-18 23:02:04.803251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71089 ] 00:06:45.745 [2024-11-18 23:02:04.956114] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71076 has claimed it. 00:06:45.745 [2024-11-18 23:02:04.956179] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:46.315 ERROR: process (pid: 71089) is no longer running 00:06:46.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71089) - No such process 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71076 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71076 ']' 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71076 00:06:46.315 23:02:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71076 00:06:46.315 killing process with pid 71076 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71076' 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71076 00:06:46.315 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71076 00:06:46.579 00:06:46.579 real 0m2.050s 00:06:46.579 user 0m5.449s 00:06:46.579 sys 0m0.497s 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.579 ************************************ 00:06:46.579 END TEST locking_overlapped_coremask 00:06:46.579 ************************************ 00:06:46.579 23:02:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:46.579 23:02:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.579 23:02:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.579 23:02:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.579 ************************************ 00:06:46.579 START TEST 
locking_overlapped_coremask_via_rpc 00:06:46.579 ************************************ 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71137 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71137 /var/tmp/spdk.sock 00:06:46.579 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71137 ']' 00:06:46.580 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.580 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.580 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.580 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.580 23:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.844 [2024-11-18 23:02:06.017944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:46.844 [2024-11-18 23:02:06.018080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71137 ] 00:06:46.844 [2024-11-18 23:02:06.177105] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:46.844 [2024-11-18 23:02:06.177169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.103 [2024-11-18 23:02:06.223722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.103 [2024-11-18 23:02:06.223818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.103 [2024-11-18 23:02:06.223948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71149 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71149 /var/tmp/spdk2.sock 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71149 ']' 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.673 23:02:06 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.673 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.674 23:02:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.674 [2024-11-18 23:02:06.920093] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:47.674 [2024-11-18 23:02:06.920205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71149 ] 00:06:47.934 [2024-11-18 23:02:07.070972] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:47.934 [2024-11-18 23:02:07.071017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.934 [2024-11-18 23:02:07.171809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.934 [2024-11-18 23:02:07.171846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.934 [2024-11-18 23:02:07.171907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.504 23:02:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.504 [2024-11-18 23:02:07.765464] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71137 has claimed it. 00:06:48.504 request: 00:06:48.504 { 00:06:48.504 "method": "framework_enable_cpumask_locks", 00:06:48.504 "req_id": 1 00:06:48.504 } 00:06:48.504 Got JSON-RPC error response 00:06:48.504 response: 00:06:48.504 { 00:06:48.504 "code": -32603, 00:06:48.504 "message": "Failed to claim CPU core: 2" 00:06:48.504 } 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71137 /var/tmp/spdk.sock 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71137 ']' 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.504 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71149 /var/tmp/spdk2.sock 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71149 ']' 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.764 23:02:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.025 00:06:49.025 real 0m2.262s 00:06:49.025 user 0m1.022s 00:06:49.025 sys 0m0.173s 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.025 23:02:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.025 ************************************ 00:06:49.025 END TEST locking_overlapped_coremask_via_rpc 00:06:49.025 ************************************ 00:06:49.025 23:02:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:49.025 23:02:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71137 ]] 00:06:49.025 23:02:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71137 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71137 ']' 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71137 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71137 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.025 killing process with pid 71137 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71137' 00:06:49.025 23:02:08 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71137 00:06:49.026 23:02:08 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71137 00:06:49.595 23:02:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71149 ]] 00:06:49.595 23:02:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71149 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71149 ']' 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71149 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71149 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:49.595 killing process with pid 71149 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71149' 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71149 00:06:49.595 23:02:08 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71149 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71137 ]] 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71137 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71137 ']' 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71137 00:06:49.856 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71137) - No such process 00:06:49.856 Process with pid 71137 is not found 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71137 is not found' 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71149 ]] 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71149 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71149 ']' 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71149 00:06:49.856 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71149) - No such process 00:06:49.856 Process with pid 71149 is not found 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71149 is not found' 00:06:49.856 23:02:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.856 00:06:49.856 real 0m18.963s 00:06:49.856 user 0m31.182s 00:06:49.856 sys 0m5.935s 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.856 23:02:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.856 
************************************ 00:06:49.856 END TEST cpu_locks 00:06:49.856 ************************************ 00:06:49.856 00:06:49.856 real 0m47.364s 00:06:49.856 user 1m29.680s 00:06:49.856 sys 0m9.769s 00:06:49.856 23:02:09 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.856 23:02:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.856 ************************************ 00:06:49.856 END TEST event 00:06:49.856 ************************************ 00:06:49.856 23:02:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:49.856 23:02:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.856 23:02:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.856 23:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:49.856 ************************************ 00:06:49.856 START TEST thread 00:06:49.856 ************************************ 00:06:49.856 23:02:09 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.115 * Looking for test storage... 
00:06:50.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:50.115 23:02:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.115 23:02:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.115 23:02:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.115 23:02:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.115 23:02:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.115 23:02:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.115 23:02:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.115 23:02:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.115 23:02:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.115 23:02:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.115 23:02:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.115 23:02:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:50.115 23:02:09 thread -- scripts/common.sh@345 -- # : 1 00:06:50.115 23:02:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.115 23:02:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.115 23:02:09 thread -- scripts/common.sh@365 -- # decimal 1 00:06:50.115 23:02:09 thread -- scripts/common.sh@353 -- # local d=1 00:06:50.115 23:02:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.115 23:02:09 thread -- scripts/common.sh@355 -- # echo 1 00:06:50.115 23:02:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.115 23:02:09 thread -- scripts/common.sh@366 -- # decimal 2 00:06:50.115 23:02:09 thread -- scripts/common.sh@353 -- # local d=2 00:06:50.115 23:02:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.115 23:02:09 thread -- scripts/common.sh@355 -- # echo 2 00:06:50.115 23:02:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.115 23:02:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.115 23:02:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.115 23:02:09 thread -- scripts/common.sh@368 -- # return 0 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:50.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.115 --rc genhtml_branch_coverage=1 00:06:50.115 --rc genhtml_function_coverage=1 00:06:50.115 --rc genhtml_legend=1 00:06:50.115 --rc geninfo_all_blocks=1 00:06:50.115 --rc geninfo_unexecuted_blocks=1 00:06:50.115 00:06:50.115 ' 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:50.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.115 --rc genhtml_branch_coverage=1 00:06:50.115 --rc genhtml_function_coverage=1 00:06:50.115 --rc genhtml_legend=1 00:06:50.115 --rc geninfo_all_blocks=1 00:06:50.115 --rc geninfo_unexecuted_blocks=1 00:06:50.115 00:06:50.115 ' 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:50.115 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.115 --rc genhtml_branch_coverage=1 00:06:50.115 --rc genhtml_function_coverage=1 00:06:50.115 --rc genhtml_legend=1 00:06:50.115 --rc geninfo_all_blocks=1 00:06:50.115 --rc geninfo_unexecuted_blocks=1 00:06:50.115 00:06:50.115 ' 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:50.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.115 --rc genhtml_branch_coverage=1 00:06:50.115 --rc genhtml_function_coverage=1 00:06:50.115 --rc genhtml_legend=1 00:06:50.115 --rc geninfo_all_blocks=1 00:06:50.115 --rc geninfo_unexecuted_blocks=1 00:06:50.115 00:06:50.115 ' 00:06:50.115 23:02:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.115 23:02:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.115 ************************************ 00:06:50.115 START TEST thread_poller_perf 00:06:50.115 ************************************ 00:06:50.115 23:02:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.374 [2024-11-18 23:02:09.505568] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:50.375 [2024-11-18 23:02:09.505737] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71287 ] 00:06:50.375 [2024-11-18 23:02:09.665457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.375 [2024-11-18 23:02:09.709350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.375 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:51.763 [2024-11-18T23:02:11.141Z] ====================================== 00:06:51.763 [2024-11-18T23:02:11.141Z] busy:2298787766 (cyc) 00:06:51.763 [2024-11-18T23:02:11.141Z] total_run_count: 424000 00:06:51.763 [2024-11-18T23:02:11.141Z] tsc_hz: 2290000000 (cyc) 00:06:51.763 [2024-11-18T23:02:11.141Z] ====================================== 00:06:51.763 [2024-11-18T23:02:11.141Z] poller_cost: 5421 (cyc), 2367 (nsec) 00:06:51.763 00:06:51.763 real 0m1.347s 00:06:51.763 user 0m1.148s 00:06:51.763 sys 0m0.093s 00:06:51.763 23:02:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.763 23:02:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.763 ************************************ 00:06:51.763 END TEST thread_poller_perf 00:06:51.763 ************************************ 00:06:51.763 23:02:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.763 23:02:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:51.763 23:02:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.763 23:02:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.763 ************************************ 00:06:51.763 START TEST thread_poller_perf 00:06:51.763 
************************************ 00:06:51.763 23:02:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.763 [2024-11-18 23:02:10.919745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.763 [2024-11-18 23:02:10.919883] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71318 ] 00:06:51.763 [2024-11-18 23:02:11.078199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.763 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:51.763 [2024-11-18 23:02:11.123613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.142 [2024-11-18T23:02:12.520Z] ====================================== 00:06:53.142 [2024-11-18T23:02:12.520Z] busy:2293132710 (cyc) 00:06:53.142 [2024-11-18T23:02:12.520Z] total_run_count: 5583000 00:06:53.142 [2024-11-18T23:02:12.520Z] tsc_hz: 2290000000 (cyc) 00:06:53.142 [2024-11-18T23:02:12.520Z] ====================================== 00:06:53.142 [2024-11-18T23:02:12.520Z] poller_cost: 410 (cyc), 179 (nsec) 00:06:53.142 00:06:53.142 real 0m1.341s 00:06:53.142 user 0m1.145s 00:06:53.142 sys 0m0.090s 00:06:53.142 23:02:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.142 ************************************ 00:06:53.142 END TEST thread_poller_perf 00:06:53.142 ************************************ 00:06:53.142 23:02:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.142 23:02:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.142 ************************************ 00:06:53.142 END TEST thread 00:06:53.142 ************************************ 00:06:53.142 
00:06:53.142 real 0m3.039s 00:06:53.142 user 0m2.462s 00:06:53.142 sys 0m0.381s 00:06:53.142 23:02:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.142 23:02:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.142 23:02:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:53.142 23:02:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.142 23:02:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.142 23:02:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.142 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:53.142 ************************************ 00:06:53.142 START TEST app_cmdline 00:06:53.142 ************************************ 00:06:53.142 23:02:12 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.142 * Looking for test storage... 00:06:53.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.142 23:02:12 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.142 23:02:12 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.142 23:02:12 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.402 23:02:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.402 --rc genhtml_branch_coverage=1 00:06:53.402 --rc genhtml_function_coverage=1 00:06:53.402 --rc 
genhtml_legend=1 00:06:53.402 --rc geninfo_all_blocks=1 00:06:53.402 --rc geninfo_unexecuted_blocks=1 00:06:53.402 00:06:53.402 ' 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.402 --rc genhtml_branch_coverage=1 00:06:53.402 --rc genhtml_function_coverage=1 00:06:53.402 --rc genhtml_legend=1 00:06:53.402 --rc geninfo_all_blocks=1 00:06:53.402 --rc geninfo_unexecuted_blocks=1 00:06:53.402 00:06:53.402 ' 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.402 --rc genhtml_branch_coverage=1 00:06:53.402 --rc genhtml_function_coverage=1 00:06:53.402 --rc genhtml_legend=1 00:06:53.402 --rc geninfo_all_blocks=1 00:06:53.402 --rc geninfo_unexecuted_blocks=1 00:06:53.402 00:06:53.402 ' 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.402 --rc genhtml_branch_coverage=1 00:06:53.402 --rc genhtml_function_coverage=1 00:06:53.402 --rc genhtml_legend=1 00:06:53.402 --rc geninfo_all_blocks=1 00:06:53.402 --rc geninfo_unexecuted_blocks=1 00:06:53.402 00:06:53.402 ' 00:06:53.402 23:02:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.402 23:02:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71407 00:06:53.402 23:02:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.402 23:02:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71407 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71407 ']' 00:06:53.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.402 23:02:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.402 [2024-11-18 23:02:12.648043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.402 [2024-11-18 23:02:12.648170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71407 ] 00:06:53.663 [2024-11-18 23:02:12.807835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.663 [2024-11-18 23:02:12.851755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.233 23:02:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.233 23:02:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:54.233 23:02:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:54.493 { 00:06:54.493 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:54.493 "fields": { 00:06:54.493 "major": 24, 00:06:54.493 "minor": 9, 00:06:54.493 "patch": 1, 00:06:54.493 "suffix": "-pre", 00:06:54.493 "commit": "b18e1bd62" 00:06:54.493 } 00:06:54.493 } 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.493 23:02:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:54.493 23:02:13 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.753 request: 00:06:54.754 { 00:06:54.754 "method": "env_dpdk_get_mem_stats", 00:06:54.754 "req_id": 1 00:06:54.754 } 00:06:54.754 Got JSON-RPC error response 00:06:54.754 response: 00:06:54.754 { 00:06:54.754 "code": -32601, 00:06:54.754 "message": "Method not found" 00:06:54.754 } 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.754 23:02:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71407 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71407 ']' 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71407 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71407 00:06:54.754 killing process with pid 71407 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71407' 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 71407 00:06:54.754 23:02:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 71407 00:06:55.014 
************************************ 00:06:55.014 END TEST app_cmdline 00:06:55.014 ************************************ 00:06:55.014 00:06:55.014 real 0m2.004s 00:06:55.014 user 0m2.226s 00:06:55.014 sys 0m0.550s 00:06:55.014 23:02:14 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.014 23:02:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.014 23:02:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.014 23:02:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.014 23:02:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.014 23:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:55.274 ************************************ 00:06:55.274 START TEST version 00:06:55.274 ************************************ 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.274 * Looking for test storage... 
00:06:55.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.274 23:02:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.274 23:02:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.274 23:02:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.274 23:02:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.274 23:02:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.274 23:02:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.274 23:02:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.274 23:02:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.274 23:02:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.274 23:02:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.274 23:02:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.274 23:02:14 version -- scripts/common.sh@344 -- # case "$op" in 00:06:55.274 23:02:14 version -- scripts/common.sh@345 -- # : 1 00:06:55.274 23:02:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.274 23:02:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.274 23:02:14 version -- scripts/common.sh@365 -- # decimal 1 00:06:55.274 23:02:14 version -- scripts/common.sh@353 -- # local d=1 00:06:55.274 23:02:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.274 23:02:14 version -- scripts/common.sh@355 -- # echo 1 00:06:55.274 23:02:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.274 23:02:14 version -- scripts/common.sh@366 -- # decimal 2 00:06:55.274 23:02:14 version -- scripts/common.sh@353 -- # local d=2 00:06:55.274 23:02:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.274 23:02:14 version -- scripts/common.sh@355 -- # echo 2 00:06:55.274 23:02:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.274 23:02:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.274 23:02:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.274 23:02:14 version -- scripts/common.sh@368 -- # return 0 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.274 23:02:14 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.274 --rc genhtml_branch_coverage=1 00:06:55.274 --rc genhtml_function_coverage=1 00:06:55.274 --rc genhtml_legend=1 00:06:55.274 --rc geninfo_all_blocks=1 00:06:55.275 --rc geninfo_unexecuted_blocks=1 00:06:55.275 00:06:55.275 ' 00:06:55.275 23:02:14 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.275 --rc genhtml_branch_coverage=1 00:06:55.275 --rc genhtml_function_coverage=1 00:06:55.275 --rc genhtml_legend=1 00:06:55.275 --rc geninfo_all_blocks=1 00:06:55.275 --rc geninfo_unexecuted_blocks=1 00:06:55.275 00:06:55.275 ' 00:06:55.275 23:02:14 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.275 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.275 --rc genhtml_branch_coverage=1 00:06:55.275 --rc genhtml_function_coverage=1 00:06:55.275 --rc genhtml_legend=1 00:06:55.275 --rc geninfo_all_blocks=1 00:06:55.275 --rc geninfo_unexecuted_blocks=1 00:06:55.275 00:06:55.275 ' 00:06:55.275 23:02:14 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.275 --rc genhtml_branch_coverage=1 00:06:55.275 --rc genhtml_function_coverage=1 00:06:55.275 --rc genhtml_legend=1 00:06:55.275 --rc geninfo_all_blocks=1 00:06:55.275 --rc geninfo_unexecuted_blocks=1 00:06:55.275 00:06:55.275 ' 00:06:55.275 23:02:14 version -- app/version.sh@17 -- # get_header_version major 00:06:55.275 23:02:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.275 23:02:14 version -- app/version.sh@14 -- # cut -f2 00:06:55.275 23:02:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.275 23:02:14 version -- app/version.sh@17 -- # major=24 00:06:55.275 23:02:14 version -- app/version.sh@18 -- # get_header_version minor 00:06:55.275 23:02:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.275 23:02:14 version -- app/version.sh@14 -- # cut -f2 00:06:55.275 23:02:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.275 23:02:14 version -- app/version.sh@18 -- # minor=9 00:06:55.275 23:02:14 version -- app/version.sh@19 -- # get_header_version patch 00:06:55.275 23:02:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.275 23:02:14 version -- app/version.sh@14 -- # cut -f2 00:06:55.275 23:02:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.535 23:02:14 version -- app/version.sh@19 -- # patch=1 00:06:55.535 
23:02:14 version -- app/version.sh@20 -- # get_header_version suffix 00:06:55.535 23:02:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.535 23:02:14 version -- app/version.sh@14 -- # cut -f2 00:06:55.535 23:02:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.535 23:02:14 version -- app/version.sh@20 -- # suffix=-pre 00:06:55.535 23:02:14 version -- app/version.sh@22 -- # version=24.9 00:06:55.535 23:02:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.535 23:02:14 version -- app/version.sh@25 -- # version=24.9.1 00:06:55.535 23:02:14 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:55.535 23:02:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.535 23:02:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.535 23:02:14 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:55.535 23:02:14 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:55.535 ************************************ 00:06:55.535 END TEST version 00:06:55.535 ************************************ 00:06:55.535 00:06:55.535 real 0m0.316s 00:06:55.535 user 0m0.183s 00:06:55.535 sys 0m0.190s 00:06:55.535 23:02:14 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.535 23:02:14 version -- common/autotest_common.sh@10 -- # set +x 00:06:55.535 23:02:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:55.535 23:02:14 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:55.535 23:02:14 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:55.535 23:02:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.535 23:02:14 -- 
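The `get_header_version` calls traced above all follow the same `grep | cut | tr` pipeline against `include/spdk/version.h`. A minimal standalone sketch of that extraction, using a throwaway header under `/tmp` rather than the real SPDK tree (the file path and values here are stand-ins):

```shell
# Stand-in for include/spdk/version.h; the real macros have the same shape.
hdr=/tmp/version_demo.h
printf '#define SPDK_VERSION_MAJOR\t24\n'      >  "$hdr"
printf '#define SPDK_VERSION_MINOR\t9\n'       >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t1\n'       >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

# Same pipeline as app/version.sh: grep the macro line, take the
# tab-separated value field with cut, strip any quotes with tr.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="${major}.${minor}"
# version.sh only appends the patch component when it is non-zero.
if [ "$patch" != 0 ]; then
    version="${version}.${patch}"
fi
echo "${version}${suffix}"
```

The test then compares this shell-derived string against `python3 -c 'import spdk; print(spdk.__version__)'`, which is what catches drift between `version.h` and the Python package.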
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.535 23:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:55.535 ************************************ 00:06:55.535 START TEST bdev_raid 00:06:55.535 ************************************ 00:06:55.535 23:02:14 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:55.535 * Looking for test storage... 00:06:55.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:55.535 23:02:14 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.535 23:02:14 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.535 23:02:14 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.795 23:02:14 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.795 23:02:14 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:55.795 23:02:14 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.795 23:02:14 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.795 --rc genhtml_branch_coverage=1 00:06:55.795 --rc genhtml_function_coverage=1 00:06:55.795 --rc genhtml_legend=1 00:06:55.795 --rc geninfo_all_blocks=1 00:06:55.795 --rc geninfo_unexecuted_blocks=1 00:06:55.795 00:06:55.795 ' 00:06:55.795 23:02:14 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.795 --rc genhtml_branch_coverage=1 00:06:55.795 --rc genhtml_function_coverage=1 00:06:55.795 --rc genhtml_legend=1 00:06:55.795 --rc geninfo_all_blocks=1 00:06:55.795 --rc geninfo_unexecuted_blocks=1 00:06:55.795 00:06:55.795 ' 00:06:55.795 23:02:14 bdev_raid -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:06:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.795 --rc genhtml_branch_coverage=1 00:06:55.795 --rc genhtml_function_coverage=1 00:06:55.795 --rc genhtml_legend=1 00:06:55.795 --rc geninfo_all_blocks=1 00:06:55.795 --rc geninfo_unexecuted_blocks=1 00:06:55.795 00:06:55.795 ' 00:06:55.795 23:02:14 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.796 --rc genhtml_branch_coverage=1 00:06:55.796 --rc genhtml_function_coverage=1 00:06:55.796 --rc genhtml_legend=1 00:06:55.796 --rc geninfo_all_blocks=1 00:06:55.796 --rc geninfo_unexecuted_blocks=1 00:06:55.796 00:06:55.796 ' 00:06:55.796 23:02:15 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:55.796 23:02:15 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.796 23:02:15 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:55.796 23:02:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:55.796 23:02:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:55.796 23:02:15 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:55.796 23:02:15 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:55.796 23:02:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.796 23:02:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.796 23:02:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.796 ************************************ 00:06:55.796 START TEST raid1_resize_data_offset_test 00:06:55.796 ************************************ 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
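The `lt 1.15 2` / `cmp_versions` trace repeated at the top of each suite gates the lcov `--rc` coverage options on the installed lcov version. A hypothetical standalone rewrite of that component-wise compare is sketched below; the real helper lives in `scripts/common.sh`, the function name here is made up, and the `decimal` guard for non-numeric components is omitted (numeric input is assumed):

```shell
# Split both versions on '.', '-', or ':' (the IFS=.-: seen in the trace)
# and compare component-wise, treating missing components as 0.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    if (( ${#ver2[@]} > max )); then max=${#ver2[@]}; fi
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal, so not strictly less-than
}

if version_lt 1.15 2; then
    echo "lcov predates 2.x: use the old-style --rc option names"
fi
```

Comparing component-wise rather than lexically is what makes `1.15 < 2` come out true (a plain string compare would rank `"1.15"` after `"1.2"`).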
raid_pid=71567 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71567' 00:06:55.796 Process raid pid: 71567 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71567 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71567 ']' 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.796 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.796 [2024-11-18 23:02:15.103374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:55.796 [2024-11-18 23:02:15.103522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.056 [2024-11-18 23:02:15.263829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.056 [2024-11-18 23:02:15.308063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.056 [2024-11-18 23:02:15.349993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.056 [2024-11-18 23:02:15.350044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.625 malloc0 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.625 malloc1 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.625 23:02:15 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.625 null0 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.625 23:02:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.625 [2024-11-18 23:02:15.997348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:56.625 [2024-11-18 23:02:15.999203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:56.625 [2024-11-18 23:02:15.999299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:56.625 [2024-11-18 23:02:15.999468] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:56.625 [2024-11-18 23:02:15.999486] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:56.625 [2024-11-18 23:02:15.999743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:56.625 [2024-11-18 23:02:15.999890] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:56.625 [2024-11-18 23:02:15.999904] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:56.625 [2024-11-18 23:02:16.000017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:56.885 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.886 [2024-11-18 23:02:16.057215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.886 malloc2 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.886 [2024-11-18 23:02:16.190228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:56.886 [2024-11-18 23:02:16.195212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.886 [2024-11-18 23:02:16.197438] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71567 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71567 ']' 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71567 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:56.886 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71567 00:06:57.146 killing process with pid 71567 00:06:57.146 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.146 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.146 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71567' 00:06:57.146 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71567 00:06:57.146 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71567 00:06:57.146 [2024-11-18 23:02:16.287222] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.146 [2024-11-18 23:02:16.288849] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:57.146 [2024-11-18 23:02:16.288968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.146 [2024-11-18 23:02:16.288988] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:57.146 [2024-11-18 23:02:16.294239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.146 [2024-11-18 23:02:16.294549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.146 [2024-11-18 23:02:16.294567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:57.146 [2024-11-18 23:02:16.504208] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.408 ************************************ 00:06:57.408 END TEST raid1_resize_data_offset_test 00:06:57.408 ************************************ 00:06:57.408 23:02:16 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:57.408 00:06:57.408 real 0m1.714s 00:06:57.408 user 0m1.706s 00:06:57.408 sys 0m0.442s 00:06:57.408 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.408 23:02:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.670 23:02:16 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:57.670 23:02:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.670 23:02:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.670 23:02:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.670 ************************************ 00:06:57.670 START TEST raid0_resize_superblock_test 00:06:57.670 ************************************ 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71625 00:06:57.670 Process raid pid: 71625 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71625' 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71625 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71625 ']' 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- 
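Each suite above follows the same lifecycle: launch `bdev_svc` in the background, `waitforlisten` until the `/var/tmp/spdk.sock` RPC socket answers, drive it with `rpc_cmd`, then `killprocess` the pid. A rough sketch of that pattern, with `sleep` standing in for the daemon and the socket polling simplified to a liveness check (the real helpers in `autotest_common.sh` retry an actual RPC up to `max_retries` times):

```shell
# Launch the stand-in daemon in the background and record its pid,
# the way the tests above launch bdev_svc -i 0 -L bdev_raid.
sleep 30 &
svc_pid=$!

# waitforlisten: poll until the service is ready; here we only
# confirm the process is alive rather than probing the RPC socket.
ready=0
for _ in $(seq 1 50); do
    if kill -0 "$svc_pid" 2>/dev/null; then ready=1; break; fi
    sleep 0.1
done

# ... the test body (rpc_cmd bdev_malloc_create, bdev_raid_create,
#     the data_offset checks) would run here ...

# killprocess: terminate and reap the child so the EXIT-trapped
# cleanup cannot hang on a lingering zombie.
kill "$svc_pid"
wait "$svc_pid" 2>/dev/null || true
```

Reaping with `wait` (rather than just `kill`) is why the traces end with `-- # wait 71567`: it guarantees the pid is gone before the next suite reuses `/var/tmp/spdk.sock`.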
common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.670 23:02:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.670 [2024-11-18 23:02:16.895476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:57.670 [2024-11-18 23:02:16.895622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.930 [2024-11-18 23:02:17.054945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.930 [2024-11-18 23:02:17.099170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.930 [2024-11-18 23:02:17.141286] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.930 [2024-11-18 23:02:17.141326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:58.497 malloc0 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 [2024-11-18 23:02:17.843280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:58.497 [2024-11-18 23:02:17.843386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.497 [2024-11-18 23:02:17.843415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:58.497 [2024-11-18 23:02:17.843427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.497 [2024-11-18 23:02:17.845474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.497 [2024-11-18 23:02:17.845510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:58.497 pt0 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.497 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.759 3f7ef68a-227b-48a4-b84b-06e360dda4c1 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 374ba40e-d12d-4729-9d59-b2aa1a46073f 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 485ee089-1287-4ff0-b2f8-0373c90ccfaf 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 [2024-11-18 23:02:17.978144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 374ba40e-d12d-4729-9d59-b2aa1a46073f is claimed 00:06:58.760 [2024-11-18 23:02:17.978217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 485ee089-1287-4ff0-b2f8-0373c90ccfaf is claimed 00:06:58.760 [2024-11-18 23:02:17.978351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:58.760 [2024-11-18 23:02:17.978364] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:58.760 [2024-11-18 23:02:17.978639] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:58.760 [2024-11-18 23:02:17.978811] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:58.760 [2024-11-18 23:02:17.978822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:58.760 [2024-11-18 23:02:17.978947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:58.760 23:02:18 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 [2024-11-18 23:02:18.082168] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 [2024-11-18 23:02:18.125998] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.760 [2024-11-18 23:02:18.126022] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '374ba40e-d12d-4729-9d59-b2aa1a46073f' was resized: old size 131072, new size 204800 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 [2024-11-18 23:02:18.133907] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.760 [2024-11-18 23:02:18.133928] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '485ee089-1287-4ff0-b2f8-0373c90ccfaf' was resized: old size 131072, new size 204800 00:06:58.760 [2024-11-18 23:02:18.133956] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.022 23:02:18 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 [2024-11-18 23:02:18.249824] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 [2024-11-18 23:02:18.293585] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:59.022 [2024-11-18 23:02:18.293688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:59.022 [2024-11-18 23:02:18.293723] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.022 [2024-11-18 23:02:18.293757] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:59.022 [2024-11-18 23:02:18.293885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.022 [2024-11-18 23:02:18.293948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.022 [2024-11-18 23:02:18.293990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 [2024-11-18 23:02:18.305502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.022 [2024-11-18 23:02:18.305559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.022 [2024-11-18 23:02:18.305579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:59.022 [2024-11-18 23:02:18.305592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.022 [2024-11-18 23:02:18.307616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.022 [2024-11-18 23:02:18.307655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:59.022 [2024-11-18 23:02:18.308986] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 374ba40e-d12d-4729-9d59-b2aa1a46073f 00:06:59.022 [2024-11-18 23:02:18.309089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 374ba40e-d12d-4729-9d59-b2aa1a46073f is claimed 00:06:59.022 [2024-11-18 23:02:18.309174] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 485ee089-1287-4ff0-b2f8-0373c90ccfaf 00:06:59.022 [2024-11-18 23:02:18.309195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 485ee089-1287-4ff0-b2f8-0373c90ccfaf is claimed 00:06:59.022 [2024-11-18 23:02:18.309268] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 485ee089-1287-4ff0-b2f8-0373c90ccfaf (2) smaller than existing raid bdev Raid (3) 00:06:59.022 [2024-11-18 23:02:18.309304] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 374ba40e-d12d-4729-9d59-b2aa1a46073f: File exists 00:06:59.022 [2024-11-18 23:02:18.309342] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:59.022 [2024-11-18 23:02:18.309351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:59.022 [2024-11-18 23:02:18.309569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:59.022 [2024-11-18 23:02:18.309684] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:59.022 [2024-11-18 23:02:18.309692] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:59.022 [2024-11-18 23:02:18.309830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.022 pt0 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.022 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.023 [2024-11-18 23:02:18.333963] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71625 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71625 ']' 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71625 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.023 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71625 00:06:59.287 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.287 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.287 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71625' 00:06:59.287 killing process with pid 71625 00:06:59.287 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71625 00:06:59.287 [2024-11-18 23:02:18.417392] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.287 [2024-11-18 23:02:18.417459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.287 [2024-11-18 23:02:18.417497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.287 [2024-11-18 23:02:18.417505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:59.287 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71625 00:06:59.287 [2024-11-18 23:02:18.575997] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.549 ************************************ 00:06:59.549 END TEST raid0_resize_superblock_test 00:06:59.549 ************************************ 00:06:59.549 23:02:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:59.549 00:06:59.549 real 0m2.005s 00:06:59.549 user 0m2.279s 00:06:59.549 sys 0m0.492s 00:06:59.549 23:02:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.549 23:02:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.549 23:02:18 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:59.549 23:02:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.549 23:02:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.549 23:02:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.549 ************************************ 00:06:59.549 START TEST raid1_resize_superblock_test 00:06:59.549 ************************************ 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71696 00:06:59.549 Process raid pid: 71696 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71696' 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71696 00:06:59.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71696 ']' 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.549 23:02:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.808 [2024-11-18 23:02:18.959622] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.808 [2024-11-18 23:02:18.959766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.808 [2024-11-18 23:02:19.120177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.808 [2024-11-18 23:02:19.164567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.067 [2024-11-18 23:02:19.206669] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.067 [2024-11-18 23:02:19.206784] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.639 malloc0 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.639 [2024-11-18 23:02:19.909026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:00.639 [2024-11-18 23:02:19.909100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.639 [2024-11-18 23:02:19.909130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:00.639 [2024-11-18 23:02:19.909143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.639 [2024-11-18 23:02:19.911318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.639 [2024-11-18 23:02:19.911360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:00.639 pt0 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.639 23:02:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.639 ec1d2036-9686-41d2-b56f-b9fc6d55fd4d 00:07:00.639 23:02:20 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.639 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:00.639 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.639 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.899 2d39242d-16e7-42c4-9f94-c1cd40276223 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.899 94c79acf-3e27-4043-bf72-62259c2f9b07 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.899 [2024-11-18 23:02:20.043995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2d39242d-16e7-42c4-9f94-c1cd40276223 is claimed 00:07:00.899 [2024-11-18 23:02:20.044074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 94c79acf-3e27-4043-bf72-62259c2f9b07 is claimed 00:07:00.899 [2024-11-18 23:02:20.044188] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:00.899 [2024-11-18 23:02:20.044204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:00.899 [2024-11-18 23:02:20.044459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:00.899 [2024-11-18 23:02:20.044650] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:00.899 [2024-11-18 23:02:20.044668] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:00.899 [2024-11-18 23:02:20.044800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.899 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 23:02:20 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 [2024-11-18 23:02:20.159999] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 [2024-11-18 23:02:20.191842] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.900 [2024-11-18 23:02:20.191872] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '2d39242d-16e7-42c4-9f94-c1cd40276223' was resized: old size 131072, new size 204800 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 [2024-11-18 23:02:20.203766] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.900 [2024-11-18 23:02:20.203788] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '94c79acf-3e27-4043-bf72-62259c2f9b07' was resized: old size 131072, new size 204800 00:07:00.900 [2024-11-18 23:02:20.203814] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:00.900 23:02:20 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.900 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.161 [2024-11-18 23:02:20.291725] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.161 [2024-11-18 23:02:20.339509] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:01.161 [2024-11-18 23:02:20.339571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:01.161 [2024-11-18 23:02:20.339598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:01.161 [2024-11-18 23:02:20.339732] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.161 [2024-11-18 23:02:20.339875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.161 [2024-11-18 23:02:20.339925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.161 [2024-11-18 23:02:20.339936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.161 [2024-11-18 23:02:20.351394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:01.161 [2024-11-18 23:02:20.351450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.161 [2024-11-18 23:02:20.351471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:01.161 [2024-11-18 23:02:20.351483] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:01.161 [2024-11-18 23:02:20.353552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.161 [2024-11-18 23:02:20.353589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:01.161 [2024-11-18 23:02:20.354928] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2d39242d-16e7-42c4-9f94-c1cd40276223 00:07:01.161 [2024-11-18 23:02:20.355033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2d39242d-16e7-42c4-9f94-c1cd40276223 is claimed 00:07:01.161 [2024-11-18 23:02:20.355124] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 94c79acf-3e27-4043-bf72-62259c2f9b07 00:07:01.161 [2024-11-18 23:02:20.355154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 94c79acf-3e27-4043-bf72-62259c2f9b07 is claimed 00:07:01.161 [2024-11-18 23:02:20.355252] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 94c79acf-3e27-4043-bf72-62259c2f9b07 (2) smaller than existing raid bdev Raid (3) 00:07:01.161 [2024-11-18 23:02:20.355270] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2d39242d-16e7-42c4-9f94-c1cd40276223: File exists 00:07:01.161 [2024-11-18 23:02:20.355320] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:01.161 [2024-11-18 23:02:20.355330] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:01.161 [2024-11-18 23:02:20.355566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:01.161 [2024-11-18 23:02:20.355686] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:01.161 [2024-11-18 23:02:20.355695] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:01.161 [2024-11-18 23:02:20.355840] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:07:01.161 pt0 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:01.161 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.162 [2024-11-18 23:02:20.379974] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71696 00:07:01.162 23:02:20 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71696 ']' 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71696 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71696 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.162 killing process with pid 71696 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71696' 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71696 00:07:01.162 [2024-11-18 23:02:20.459886] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.162 [2024-11-18 23:02:20.459947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.162 [2024-11-18 23:02:20.459990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.162 [2024-11-18 23:02:20.459998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:01.162 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71696 00:07:01.422 [2024-11-18 23:02:20.618308] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.682 23:02:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:01.682 ************************************ 00:07:01.682 
END TEST raid1_resize_superblock_test 00:07:01.682 ************************************ 00:07:01.682 00:07:01.682 real 0m1.972s 00:07:01.682 user 0m2.212s 00:07:01.682 sys 0m0.506s 00:07:01.682 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.682 23:02:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.682 23:02:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:01.682 23:02:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:01.682 23:02:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:01.682 23:02:20 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:01.682 23:02:20 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:01.682 23:02:20 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:01.682 23:02:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.682 23:02:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.682 23:02:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.682 ************************************ 00:07:01.682 START TEST raid_function_test_raid0 00:07:01.682 ************************************ 00:07:01.682 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:01.683 Process raid pid: 71771 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71771 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71771' 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71771 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71771 ']' 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.683 23:02:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.683 [2024-11-18 23:02:21.024948] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:01.683 [2024-11-18 23:02:21.025144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.950 [2024-11-18 23:02:21.183528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.950 [2024-11-18 23:02:21.227584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.950 [2024-11-18 23:02:21.269696] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.950 [2024-11-18 23:02:21.269804] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:02.527 Base_1 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:02.527 Base_2 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:02.527 [2024-11-18 23:02:21.894000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:02.527 [2024-11-18 23:02:21.895855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:02.527 [2024-11-18 23:02:21.895918] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:02.527 [2024-11-18 23:02:21.895929] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:02.527 [2024-11-18 23:02:21.896171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:02.527 [2024-11-18 23:02:21.896297] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:02.527 [2024-11-18 23:02:21.896313] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:02.527 [2024-11-18 23:02:21.896447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:02.527 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.528 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:02.793 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:02.793 23:02:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.793 23:02:21 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:02.793 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:02.793 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:02.794 23:02:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:02.794 [2024-11-18 23:02:22.109666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:02.794 /dev/nbd0 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.794 
23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.794 1+0 records in 00:07:02.794 1+0 records out 00:07:02.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288132 s, 14.2 MB/s 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:02.794 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.056 { 00:07:03.056 "nbd_device": "/dev/nbd0", 00:07:03.056 "bdev_name": "raid" 00:07:03.056 } 00:07:03.056 ]' 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.056 { 00:07:03.056 "nbd_device": "/dev/nbd0", 00:07:03.056 "bdev_name": "raid" 00:07:03.056 } 00:07:03.056 ]' 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:03.056 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:03.316 4096+0 records in 00:07:03.316 4096+0 records out 00:07:03.316 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0250461 s, 83.7 MB/s 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:03.316 4096+0 records in 00:07:03.316 4096+0 records out 00:07:03.316 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.172976 s, 12.1 MB/s 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:03.316 128+0 records in 00:07:03.316 128+0 records out 00:07:03.316 65536 bytes (66 kB, 64 KiB) copied, 0.000375992 s, 174 MB/s 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:03.316 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:03.589 2035+0 records in 00:07:03.589 2035+0 records out 00:07:03.589 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0127026 s, 82.0 MB/s 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:03.589 23:02:22 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:03.589 456+0 records in 00:07:03.589 456+0 records out 00:07:03.589 233472 bytes (233 kB, 228 KiB) copied, 0.00338461 s, 69.0 MB/s 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.589 23:02:22 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.589 [2024-11-18 23:02:22.956771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.589 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:03.850 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.850 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:03.850 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.850 23:02:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71771 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71771 ']' 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71771 00:07:03.850 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71771 00:07:04.110 killing process with pid 71771 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71771' 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71771 
00:07:04.110 [2024-11-18 23:02:23.263625] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.110 [2024-11-18 23:02:23.263745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.110 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71771 00:07:04.110 [2024-11-18 23:02:23.263800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.110 [2024-11-18 23:02:23.263818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:04.110 [2024-11-18 23:02:23.286319] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.371 ************************************ 00:07:04.371 END TEST raid_function_test_raid0 00:07:04.371 ************************************ 00:07:04.371 23:02:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:04.371 00:07:04.371 real 0m2.584s 00:07:04.371 user 0m3.154s 00:07:04.371 sys 0m0.871s 00:07:04.371 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.371 23:02:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.371 23:02:23 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:04.371 23:02:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:04.371 23:02:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.371 23:02:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.371 ************************************ 00:07:04.371 START TEST raid_function_test_concat 00:07:04.371 ************************************ 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:04.371 Process raid pid: 71887 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71887 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71887' 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71887 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71887 ']' 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.371 23:02:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:04.371 [2024-11-18 23:02:23.680759] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:04.371 [2024-11-18 23:02:23.680972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.631 [2024-11-18 23:02:23.840807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.631 [2024-11-18 23:02:23.888167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.631 [2024-11-18 23:02:23.930448] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.631 [2024-11-18 23:02:23.930548] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.201 Base_1 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.201 Base_2 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.201 [2024-11-18 23:02:24.555399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:05.201 [2024-11-18 23:02:24.559008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:05.201 [2024-11-18 23:02:24.559133] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:05.201 [2024-11-18 23:02:24.559173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:05.201 [2024-11-18 23:02:24.559729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:05.201 [2024-11-18 23:02:24.560001] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:05.201 [2024-11-18 23:02:24.560023] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:05.201 [2024-11-18 23:02:24.560366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.201 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.461 23:02:24 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:05.461 [2024-11-18 23:02:24.770894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:05.461 /dev/nbd0 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.461 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.462 1+0 records in 00:07:05.462 1+0 records out 00:07:05.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217616 s, 18.8 MB/s 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:07:05.462 23:02:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:05.721 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.721 { 00:07:05.721 "nbd_device": "/dev/nbd0", 00:07:05.721 "bdev_name": "raid" 00:07:05.721 } 00:07:05.721 ]' 00:07:05.721 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.721 { 00:07:05.721 "nbd_device": "/dev/nbd0", 00:07:05.721 "bdev_name": "raid" 00:07:05.721 } 00:07:05.721 ]' 00:07:05.721 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.721 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:05.721 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:05.722 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:05.982 4096+0 records in 00:07:05.982 4096+0 records out 00:07:05.982 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0329177 s, 63.7 MB/s 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:05.982 4096+0 records in 00:07:05.982 4096+0 records out 00:07:05.982 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.178336 s, 11.8 MB/s 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:05.982 128+0 records in 00:07:05.982 128+0 records out 00:07:05.982 65536 bytes (66 kB, 64 KiB) copied, 0.0016525 s, 39.7 MB/s 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:05.982 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:06.241 2035+0 records in 00:07:06.241 2035+0 records out 00:07:06.241 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130141 s, 80.1 MB/s 00:07:06.241 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:06.242 23:02:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:06.242 456+0 records in 00:07:06.242 456+0 records out 00:07:06.242 233472 bytes (233 kB, 228 KiB) copied, 0.00282146 s, 82.7 MB/s 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:06.242 
23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.242 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.501 [2024-11-18 23:02:25.643636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.501 23:02:25 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.501 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71887 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71887 ']' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71887 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71887 00:07:06.762 killing process with pid 71887 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71887' 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71887 00:07:06.762 [2024-11-18 23:02:25.945888] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.762 [2024-11-18 23:02:25.945997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.762 23:02:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71887 00:07:06.762 [2024-11-18 23:02:25.946050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.762 [2024-11-18 23:02:25.946062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:06.762 [2024-11-18 23:02:25.968789] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.021 23:02:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:07.021 00:07:07.021 real 0m2.611s 00:07:07.021 user 0m3.169s 00:07:07.021 sys 0m0.918s 00:07:07.021 ************************************ 00:07:07.021 END TEST raid_function_test_concat 00:07:07.021 ************************************ 00:07:07.021 23:02:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.021 23:02:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.021 23:02:26 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:07.021 23:02:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.021 23:02:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.021 23:02:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.021 ************************************ 00:07:07.021 START TEST raid0_resize_test 00:07:07.021 ************************************ 00:07:07.021 23:02:26 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71998 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71998' 00:07:07.021 Process raid pid: 71998 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71998 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71998 ']' 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.021 23:02:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.021 [2024-11-18 23:02:26.364841] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:07.021 [2024-11-18 23:02:26.365044] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.281 [2024-11-18 23:02:26.522967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.281 [2024-11-18 23:02:26.567880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.281 [2024-11-18 23:02:26.610943] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.281 [2024-11-18 23:02:26.611052] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.853 Base_1 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.853 Base_2 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.853 [2024-11-18 23:02:27.212862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.853 [2024-11-18 23:02:27.214638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.853 [2024-11-18 23:02:27.214692] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:07.853 [2024-11-18 23:02:27.214702] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.853 [2024-11-18 23:02:27.214948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:07.853 [2024-11-18 23:02:27.215044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:07.853 [2024-11-18 23:02:27.215052] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:07.853 [2024-11-18 23:02:27.215179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:07.853 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.853 [2024-11-18 23:02:27.224826] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.853 [2024-11-18 23:02:27.224851] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:08.113 true 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.113 [2024-11-18 23:02:27.240964] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.113 [2024-11-18 23:02:27.284706] 
bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:08.113 [2024-11-18 23:02:27.284766] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:08.113 [2024-11-18 23:02:27.284818] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:08.113 true 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.113 [2024-11-18 23:02:27.296847] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71998 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71998 ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 71998 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@955 -- # uname 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71998 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71998' 00:07:08.113 killing process with pid 71998 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 71998 00:07:08.113 [2024-11-18 23:02:27.382577] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.113 [2024-11-18 23:02:27.382704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.113 [2024-11-18 23:02:27.382780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.113 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 71998 00:07:08.113 [2024-11-18 23:02:27.382825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:08.113 [2024-11-18 23:02:27.384316] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.378 23:02:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:08.378 00:07:08.378 real 0m1.347s 00:07:08.378 user 0m1.497s 00:07:08.378 sys 0m0.312s 00:07:08.378 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.378 23:02:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.378 ************************************ 00:07:08.378 END TEST raid0_resize_test
************************************ 00:07:08.378 23:02:27 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:08.378 23:02:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.378 23:02:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.378 23:02:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.378 ************************************ 00:07:08.378 START TEST raid1_resize_test 00:07:08.378 ************************************ 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:08.378 Process raid pid: 72049 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72049 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72049' 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72049 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@831 -- # '[' -z 72049 ']' 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.378 23:02:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.638 [2024-11-18 23:02:27.783835] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:08.638 [2024-11-18 23:02:27.784048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.638 [2024-11-18 23:02:27.945984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.638 [2024-11-18 23:02:27.993311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.898 [2024-11-18 23:02:28.035876] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.898 [2024-11-18 23:02:28.035912] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.468 Base_1 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.468 Base_2 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.468 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.468 [2024-11-18 23:02:28.625816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.469 [2024-11-18 23:02:28.627646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.469 [2024-11-18 23:02:28.627705] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:09.469 [2024-11-18 23:02:28.627715] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:09.469 [2024-11-18 23:02:28.627954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:09.469 [2024-11-18 23:02:28.628060] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:09.469 [2024-11-18 23:02:28.628073] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000006280 00:07:09.469 [2024-11-18 23:02:28.628191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.469 [2024-11-18 23:02:28.637758] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.469 [2024-11-18 23:02:28.637826] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:09.469 true 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.469 [2024-11-18 23:02:28.653924] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.469 [2024-11-18 23:02:28.697667] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.469 [2024-11-18 23:02:28.697688] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:09.469 [2024-11-18 23:02:28.697715] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:09.469 true 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.469 [2024-11-18 23:02:28.713791] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:09.469 
23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72049 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72049 ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72049 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72049 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72049' 00:07:09.469 killing process with pid 72049 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72049 00:07:09.469 [2024-11-18 23:02:28.787468] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.469 [2024-11-18 23:02:28.787595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.469 23:02:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72049 00:07:09.469 [2024-11-18 23:02:28.788016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.469 [2024-11-18 23:02:28.788080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:09.469 [2024-11-18 23:02:28.789229] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.733 23:02:29 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:07:09.733 00:07:09.733 real 0m1.327s 00:07:09.733 user 0m1.472s 00:07:09.733 sys 0m0.304s 00:07:09.733 23:02:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.733 23:02:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.733 ************************************ 00:07:09.733 END TEST raid1_resize_test 00:07:09.733 ************************************ 00:07:09.733 23:02:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:09.733 23:02:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:09.734 23:02:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:09.734 23:02:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.734 23:02:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.734 23:02:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.734 ************************************ 00:07:09.734 START TEST raid_state_function_test 00:07:09.734 ************************************ 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.734 23:02:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.734 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:09.735 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:10.004 Process raid pid: 72095 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72095 00:07:10.004 23:02:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72095' 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72095 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72095 ']' 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.004 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.004 [2024-11-18 23:02:29.215328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:10.004 [2024-11-18 23:02:29.215579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.263 [2024-11-18 23:02:29.385301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.263 [2024-11-18 23:02:29.429812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.263 [2024-11-18 23:02:29.473172] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.263 [2024-11-18 23:02:29.473336] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.833 [2024-11-18 23:02:30.051352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.833 [2024-11-18 23:02:30.051450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.833 [2024-11-18 23:02:30.051465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.833 [2024-11-18 23:02:30.051476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.833 23:02:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.833 "name": "Existed_Raid", 00:07:10.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.833 "strip_size_kb": 64, 00:07:10.833 "state": "configuring", 00:07:10.833 
"raid_level": "raid0", 00:07:10.833 "superblock": false, 00:07:10.833 "num_base_bdevs": 2, 00:07:10.833 "num_base_bdevs_discovered": 0, 00:07:10.833 "num_base_bdevs_operational": 2, 00:07:10.833 "base_bdevs_list": [ 00:07:10.833 { 00:07:10.833 "name": "BaseBdev1", 00:07:10.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.833 "is_configured": false, 00:07:10.833 "data_offset": 0, 00:07:10.833 "data_size": 0 00:07:10.833 }, 00:07:10.833 { 00:07:10.833 "name": "BaseBdev2", 00:07:10.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.833 "is_configured": false, 00:07:10.833 "data_offset": 0, 00:07:10.833 "data_size": 0 00:07:10.833 } 00:07:10.833 ] 00:07:10.833 }' 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.833 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.403 [2024-11-18 23:02:30.498471] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.403 [2024-11-18 23:02:30.498559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:11.403 [2024-11-18 23:02:30.510473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.403 [2024-11-18 23:02:30.510559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.403 [2024-11-18 23:02:30.510586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.403 [2024-11-18 23:02:30.510607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.403 [2024-11-18 23:02:30.531206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.403 BaseBdev1 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.403 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.403 [ 00:07:11.403 { 00:07:11.403 "name": "BaseBdev1", 00:07:11.403 "aliases": [ 00:07:11.403 "5efdd5a0-1b58-4dc8-87bf-a9577192c4a0" 00:07:11.403 ], 00:07:11.403 "product_name": "Malloc disk", 00:07:11.403 "block_size": 512, 00:07:11.403 "num_blocks": 65536, 00:07:11.403 "uuid": "5efdd5a0-1b58-4dc8-87bf-a9577192c4a0", 00:07:11.403 "assigned_rate_limits": { 00:07:11.403 "rw_ios_per_sec": 0, 00:07:11.403 "rw_mbytes_per_sec": 0, 00:07:11.403 "r_mbytes_per_sec": 0, 00:07:11.403 "w_mbytes_per_sec": 0 00:07:11.403 }, 00:07:11.403 "claimed": true, 00:07:11.403 "claim_type": "exclusive_write", 00:07:11.403 "zoned": false, 00:07:11.403 "supported_io_types": { 00:07:11.403 "read": true, 00:07:11.403 "write": true, 00:07:11.403 "unmap": true, 00:07:11.403 "flush": true, 00:07:11.403 "reset": true, 00:07:11.403 "nvme_admin": false, 00:07:11.403 "nvme_io": false, 00:07:11.403 "nvme_io_md": false, 00:07:11.403 "write_zeroes": true, 00:07:11.403 "zcopy": true, 00:07:11.403 "get_zone_info": false, 00:07:11.404 "zone_management": false, 00:07:11.404 "zone_append": false, 00:07:11.404 "compare": false, 00:07:11.404 "compare_and_write": false, 00:07:11.404 "abort": true, 00:07:11.404 "seek_hole": false, 00:07:11.404 "seek_data": false, 00:07:11.404 "copy": true, 00:07:11.404 "nvme_iov_md": 
false 00:07:11.404 }, 00:07:11.404 "memory_domains": [ 00:07:11.404 { 00:07:11.404 "dma_device_id": "system", 00:07:11.404 "dma_device_type": 1 00:07:11.404 }, 00:07:11.404 { 00:07:11.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.404 "dma_device_type": 2 00:07:11.404 } 00:07:11.404 ], 00:07:11.404 "driver_specific": {} 00:07:11.404 } 00:07:11.404 ] 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.404 
23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.404 "name": "Existed_Raid", 00:07:11.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.404 "strip_size_kb": 64, 00:07:11.404 "state": "configuring", 00:07:11.404 "raid_level": "raid0", 00:07:11.404 "superblock": false, 00:07:11.404 "num_base_bdevs": 2, 00:07:11.404 "num_base_bdevs_discovered": 1, 00:07:11.404 "num_base_bdevs_operational": 2, 00:07:11.404 "base_bdevs_list": [ 00:07:11.404 { 00:07:11.404 "name": "BaseBdev1", 00:07:11.404 "uuid": "5efdd5a0-1b58-4dc8-87bf-a9577192c4a0", 00:07:11.404 "is_configured": true, 00:07:11.404 "data_offset": 0, 00:07:11.404 "data_size": 65536 00:07:11.404 }, 00:07:11.404 { 00:07:11.404 "name": "BaseBdev2", 00:07:11.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.404 "is_configured": false, 00:07:11.404 "data_offset": 0, 00:07:11.404 "data_size": 0 00:07:11.404 } 00:07:11.404 ] 00:07:11.404 }' 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.404 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.664 [2024-11-18 23:02:30.974442] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.664 [2024-11-18 23:02:30.974537] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.664 [2024-11-18 23:02:30.986456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.664 [2024-11-18 23:02:30.988252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.664 [2024-11-18 23:02:30.988302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.664 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.664 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.664 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.664 "name": "Existed_Raid", 00:07:11.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.664 "strip_size_kb": 64, 00:07:11.664 "state": "configuring", 00:07:11.664 "raid_level": "raid0", 00:07:11.664 "superblock": false, 00:07:11.664 "num_base_bdevs": 2, 00:07:11.664 "num_base_bdevs_discovered": 1, 00:07:11.664 "num_base_bdevs_operational": 2, 00:07:11.664 "base_bdevs_list": [ 00:07:11.664 { 00:07:11.664 "name": "BaseBdev1", 00:07:11.664 "uuid": "5efdd5a0-1b58-4dc8-87bf-a9577192c4a0", 00:07:11.664 "is_configured": true, 00:07:11.664 "data_offset": 0, 00:07:11.664 "data_size": 65536 00:07:11.664 }, 00:07:11.664 { 00:07:11.664 "name": "BaseBdev2", 00:07:11.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.664 "is_configured": false, 00:07:11.664 "data_offset": 0, 00:07:11.664 "data_size": 0 00:07:11.664 } 00:07:11.664 
] 00:07:11.664 }' 00:07:11.664 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.664 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.238 [2024-11-18 23:02:31.436341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.238 [2024-11-18 23:02:31.436506] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:12.238 [2024-11-18 23:02:31.436563] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.238 [2024-11-18 23:02:31.437183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:12.238 [2024-11-18 23:02:31.437533] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:12.238 [2024-11-18 23:02:31.437630] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:12.238 [2024-11-18 23:02:31.438101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.238 BaseBdev2 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:12.238 23:02:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.238 [ 00:07:12.238 { 00:07:12.238 "name": "BaseBdev2", 00:07:12.238 "aliases": [ 00:07:12.238 "7932af6c-458a-4b17-986d-ee421203176e" 00:07:12.238 ], 00:07:12.238 "product_name": "Malloc disk", 00:07:12.238 "block_size": 512, 00:07:12.238 "num_blocks": 65536, 00:07:12.238 "uuid": "7932af6c-458a-4b17-986d-ee421203176e", 00:07:12.238 "assigned_rate_limits": { 00:07:12.238 "rw_ios_per_sec": 0, 00:07:12.238 "rw_mbytes_per_sec": 0, 00:07:12.238 "r_mbytes_per_sec": 0, 00:07:12.238 "w_mbytes_per_sec": 0 00:07:12.238 }, 00:07:12.238 "claimed": true, 00:07:12.238 "claim_type": "exclusive_write", 00:07:12.238 "zoned": false, 00:07:12.238 "supported_io_types": { 00:07:12.238 "read": true, 00:07:12.238 "write": true, 00:07:12.238 "unmap": true, 00:07:12.238 "flush": true, 00:07:12.238 "reset": true, 00:07:12.238 "nvme_admin": false, 00:07:12.238 "nvme_io": false, 00:07:12.238 "nvme_io_md": 
false, 00:07:12.238 "write_zeroes": true, 00:07:12.238 "zcopy": true, 00:07:12.238 "get_zone_info": false, 00:07:12.238 "zone_management": false, 00:07:12.238 "zone_append": false, 00:07:12.238 "compare": false, 00:07:12.238 "compare_and_write": false, 00:07:12.238 "abort": true, 00:07:12.238 "seek_hole": false, 00:07:12.238 "seek_data": false, 00:07:12.238 "copy": true, 00:07:12.238 "nvme_iov_md": false 00:07:12.238 }, 00:07:12.238 "memory_domains": [ 00:07:12.238 { 00:07:12.238 "dma_device_id": "system", 00:07:12.238 "dma_device_type": 1 00:07:12.238 }, 00:07:12.238 { 00:07:12.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.238 "dma_device_type": 2 00:07:12.238 } 00:07:12.238 ], 00:07:12.238 "driver_specific": {} 00:07:12.238 } 00:07:12.238 ] 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.238 "name": "Existed_Raid", 00:07:12.238 "uuid": "4d3d91d1-b3f5-459d-904f-fb7bc37f3746", 00:07:12.238 "strip_size_kb": 64, 00:07:12.238 "state": "online", 00:07:12.238 "raid_level": "raid0", 00:07:12.238 "superblock": false, 00:07:12.238 "num_base_bdevs": 2, 00:07:12.238 "num_base_bdevs_discovered": 2, 00:07:12.238 "num_base_bdevs_operational": 2, 00:07:12.238 "base_bdevs_list": [ 00:07:12.238 { 00:07:12.238 "name": "BaseBdev1", 00:07:12.238 "uuid": "5efdd5a0-1b58-4dc8-87bf-a9577192c4a0", 00:07:12.238 "is_configured": true, 00:07:12.238 "data_offset": 0, 00:07:12.238 "data_size": 65536 00:07:12.238 }, 00:07:12.238 { 00:07:12.238 "name": "BaseBdev2", 00:07:12.238 "uuid": "7932af6c-458a-4b17-986d-ee421203176e", 00:07:12.238 "is_configured": true, 00:07:12.238 "data_offset": 0, 00:07:12.238 "data_size": 65536 00:07:12.238 } 00:07:12.238 ] 00:07:12.238 }' 00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:12.238 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 [2024-11-18 23:02:31.927732] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.808 "name": "Existed_Raid", 00:07:12.808 "aliases": [ 00:07:12.808 "4d3d91d1-b3f5-459d-904f-fb7bc37f3746" 00:07:12.808 ], 00:07:12.808 "product_name": "Raid Volume", 00:07:12.808 "block_size": 512, 00:07:12.808 "num_blocks": 131072, 00:07:12.808 "uuid": "4d3d91d1-b3f5-459d-904f-fb7bc37f3746", 00:07:12.808 "assigned_rate_limits": { 00:07:12.808 "rw_ios_per_sec": 0, 00:07:12.808 "rw_mbytes_per_sec": 0, 00:07:12.808 "r_mbytes_per_sec": 
0, 00:07:12.808 "w_mbytes_per_sec": 0 00:07:12.808 }, 00:07:12.808 "claimed": false, 00:07:12.808 "zoned": false, 00:07:12.808 "supported_io_types": { 00:07:12.808 "read": true, 00:07:12.808 "write": true, 00:07:12.808 "unmap": true, 00:07:12.808 "flush": true, 00:07:12.808 "reset": true, 00:07:12.808 "nvme_admin": false, 00:07:12.808 "nvme_io": false, 00:07:12.808 "nvme_io_md": false, 00:07:12.808 "write_zeroes": true, 00:07:12.808 "zcopy": false, 00:07:12.808 "get_zone_info": false, 00:07:12.808 "zone_management": false, 00:07:12.808 "zone_append": false, 00:07:12.808 "compare": false, 00:07:12.808 "compare_and_write": false, 00:07:12.808 "abort": false, 00:07:12.808 "seek_hole": false, 00:07:12.808 "seek_data": false, 00:07:12.808 "copy": false, 00:07:12.808 "nvme_iov_md": false 00:07:12.808 }, 00:07:12.808 "memory_domains": [ 00:07:12.808 { 00:07:12.808 "dma_device_id": "system", 00:07:12.808 "dma_device_type": 1 00:07:12.808 }, 00:07:12.808 { 00:07:12.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.808 "dma_device_type": 2 00:07:12.808 }, 00:07:12.808 { 00:07:12.808 "dma_device_id": "system", 00:07:12.808 "dma_device_type": 1 00:07:12.808 }, 00:07:12.808 { 00:07:12.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.808 "dma_device_type": 2 00:07:12.808 } 00:07:12.808 ], 00:07:12.808 "driver_specific": { 00:07:12.808 "raid": { 00:07:12.808 "uuid": "4d3d91d1-b3f5-459d-904f-fb7bc37f3746", 00:07:12.808 "strip_size_kb": 64, 00:07:12.808 "state": "online", 00:07:12.808 "raid_level": "raid0", 00:07:12.808 "superblock": false, 00:07:12.808 "num_base_bdevs": 2, 00:07:12.808 "num_base_bdevs_discovered": 2, 00:07:12.808 "num_base_bdevs_operational": 2, 00:07:12.808 "base_bdevs_list": [ 00:07:12.808 { 00:07:12.808 "name": "BaseBdev1", 00:07:12.808 "uuid": "5efdd5a0-1b58-4dc8-87bf-a9577192c4a0", 00:07:12.808 "is_configured": true, 00:07:12.808 "data_offset": 0, 00:07:12.808 "data_size": 65536 00:07:12.808 }, 00:07:12.808 { 00:07:12.808 "name": "BaseBdev2", 
00:07:12.808 "uuid": "7932af6c-458a-4b17-986d-ee421203176e", 00:07:12.808 "is_configured": true, 00:07:12.808 "data_offset": 0, 00:07:12.808 "data_size": 65536 00:07:12.808 } 00:07:12.808 ] 00:07:12.808 } 00:07:12.808 } 00:07:12.808 }' 00:07:12.808 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:12.808 BaseBdev2' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 [2024-11-18 23:02:32.151184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:12.808 [2024-11-18 23:02:32.151211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.808 [2024-11-18 23:02:32.151255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.808 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.068 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.068 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.068 "name": "Existed_Raid", 00:07:13.068 "uuid": "4d3d91d1-b3f5-459d-904f-fb7bc37f3746", 00:07:13.068 "strip_size_kb": 64, 00:07:13.068 
"state": "offline", 00:07:13.068 "raid_level": "raid0", 00:07:13.068 "superblock": false, 00:07:13.068 "num_base_bdevs": 2, 00:07:13.068 "num_base_bdevs_discovered": 1, 00:07:13.068 "num_base_bdevs_operational": 1, 00:07:13.068 "base_bdevs_list": [ 00:07:13.068 { 00:07:13.068 "name": null, 00:07:13.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.068 "is_configured": false, 00:07:13.068 "data_offset": 0, 00:07:13.068 "data_size": 65536 00:07:13.068 }, 00:07:13.068 { 00:07:13.068 "name": "BaseBdev2", 00:07:13.068 "uuid": "7932af6c-458a-4b17-986d-ee421203176e", 00:07:13.068 "is_configured": true, 00:07:13.068 "data_offset": 0, 00:07:13.068 "data_size": 65536 00:07:13.068 } 00:07:13.068 ] 00:07:13.068 }' 00:07:13.068 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.068 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:13.328 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 [2024-11-18 23:02:32.629462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:13.329 [2024-11-18 23:02:32.629564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72095 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72095 ']' 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72095 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:13.329 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.593 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72095 00:07:13.593 killing process with pid 72095 00:07:13.593 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.593 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.593 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72095' 00:07:13.593 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72095 00:07:13.593 [2024-11-18 23:02:32.739354] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.593 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72095 00:07:13.593 [2024-11-18 23:02:32.740341] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.853 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:13.853 00:07:13.853 real 0m3.880s 00:07:13.853 user 0m6.088s 00:07:13.853 sys 0m0.769s 00:07:13.853 ************************************ 00:07:13.853 END TEST raid_state_function_test 00:07:13.853 ************************************ 00:07:13.853 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.853 23:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.853 23:02:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:13.853 23:02:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:13.853 23:02:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.853 23:02:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.853 ************************************ 00:07:13.853 START TEST raid_state_function_test_sb 00:07:13.853 ************************************ 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.853 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:13.854 Process raid pid: 72337 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72337 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72337' 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72337 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72337 ']' 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.854 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.854 [2024-11-18 23:02:33.133878] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:13.854 [2024-11-18 23:02:33.134077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.114 [2024-11-18 23:02:33.284522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.114 [2024-11-18 23:02:33.328489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.114 [2024-11-18 23:02:33.370705] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.114 [2024-11-18 23:02:33.370817] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.684 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.684 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:14.684 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.684 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.684 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.684 [2024-11-18 23:02:33.952381] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:14.684 [2024-11-18 23:02:33.952490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.684 [2024-11-18 23:02:33.952523] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.685 [2024-11-18 23:02:33.952546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.685 23:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.685 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.685 "name": "Existed_Raid", 00:07:14.685 "uuid": "46e8f752-4049-461f-bcc0-aeef4e435259", 00:07:14.685 "strip_size_kb": 64, 00:07:14.685 "state": "configuring", 00:07:14.685 "raid_level": "raid0", 00:07:14.685 "superblock": true, 00:07:14.685 "num_base_bdevs": 2, 00:07:14.685 "num_base_bdevs_discovered": 0, 00:07:14.685 "num_base_bdevs_operational": 2, 00:07:14.685 "base_bdevs_list": [ 00:07:14.685 { 00:07:14.685 "name": "BaseBdev1", 00:07:14.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.685 "is_configured": false, 00:07:14.685 "data_offset": 0, 00:07:14.685 "data_size": 0 00:07:14.685 }, 00:07:14.685 { 00:07:14.685 "name": "BaseBdev2", 00:07:14.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.685 "is_configured": false, 00:07:14.685 "data_offset": 0, 00:07:14.685 "data_size": 0 00:07:14.685 } 00:07:14.685 ] 00:07:14.685 }' 00:07:14.685 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.685 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 [2024-11-18 23:02:34.391517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:15.255 [2024-11-18 23:02:34.391557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 [2024-11-18 23:02:34.403540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.255 [2024-11-18 23:02:34.403613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.255 [2024-11-18 23:02:34.403639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.255 [2024-11-18 23:02:34.403660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 [2024-11-18 23:02:34.424371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.255 BaseBdev1 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 [ 00:07:15.255 { 00:07:15.255 "name": "BaseBdev1", 00:07:15.255 "aliases": [ 00:07:15.255 "c179f4af-ab7d-4b66-ad1f-4428f581f866" 00:07:15.255 ], 00:07:15.255 "product_name": "Malloc disk", 00:07:15.255 "block_size": 512, 00:07:15.255 "num_blocks": 65536, 00:07:15.255 "uuid": "c179f4af-ab7d-4b66-ad1f-4428f581f866", 00:07:15.255 "assigned_rate_limits": { 00:07:15.255 "rw_ios_per_sec": 0, 00:07:15.255 "rw_mbytes_per_sec": 0, 00:07:15.255 "r_mbytes_per_sec": 0, 00:07:15.255 "w_mbytes_per_sec": 0 00:07:15.255 }, 00:07:15.255 "claimed": true, 
00:07:15.255 "claim_type": "exclusive_write", 00:07:15.255 "zoned": false, 00:07:15.255 "supported_io_types": { 00:07:15.255 "read": true, 00:07:15.255 "write": true, 00:07:15.255 "unmap": true, 00:07:15.255 "flush": true, 00:07:15.255 "reset": true, 00:07:15.255 "nvme_admin": false, 00:07:15.255 "nvme_io": false, 00:07:15.255 "nvme_io_md": false, 00:07:15.255 "write_zeroes": true, 00:07:15.255 "zcopy": true, 00:07:15.255 "get_zone_info": false, 00:07:15.255 "zone_management": false, 00:07:15.255 "zone_append": false, 00:07:15.255 "compare": false, 00:07:15.255 "compare_and_write": false, 00:07:15.255 "abort": true, 00:07:15.255 "seek_hole": false, 00:07:15.255 "seek_data": false, 00:07:15.255 "copy": true, 00:07:15.255 "nvme_iov_md": false 00:07:15.255 }, 00:07:15.255 "memory_domains": [ 00:07:15.255 { 00:07:15.255 "dma_device_id": "system", 00:07:15.255 "dma_device_type": 1 00:07:15.255 }, 00:07:15.255 { 00:07:15.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.255 "dma_device_type": 2 00:07:15.255 } 00:07:15.255 ], 00:07:15.255 "driver_specific": {} 00:07:15.255 } 00:07:15.255 ] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.255 23:02:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.255 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.255 "name": "Existed_Raid", 00:07:15.255 "uuid": "1135e571-de5e-4e54-9be7-95103152e3bd", 00:07:15.255 "strip_size_kb": 64, 00:07:15.255 "state": "configuring", 00:07:15.255 "raid_level": "raid0", 00:07:15.255 "superblock": true, 00:07:15.255 "num_base_bdevs": 2, 00:07:15.255 "num_base_bdevs_discovered": 1, 00:07:15.255 "num_base_bdevs_operational": 2, 00:07:15.255 "base_bdevs_list": [ 00:07:15.255 { 00:07:15.255 "name": "BaseBdev1", 00:07:15.255 "uuid": "c179f4af-ab7d-4b66-ad1f-4428f581f866", 00:07:15.255 "is_configured": true, 00:07:15.255 "data_offset": 2048, 00:07:15.255 "data_size": 63488 00:07:15.255 }, 00:07:15.255 { 00:07:15.255 "name": "BaseBdev2", 00:07:15.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.256 
"is_configured": false, 00:07:15.256 "data_offset": 0, 00:07:15.256 "data_size": 0 00:07:15.256 } 00:07:15.256 ] 00:07:15.256 }' 00:07:15.256 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.256 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.833 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.834 [2024-11-18 23:02:34.923546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.834 [2024-11-18 23:02:34.923593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.834 [2024-11-18 23:02:34.935570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.834 [2024-11-18 23:02:34.937438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.834 [2024-11-18 23:02:34.937475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.834 23:02:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.834 23:02:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.834 "name": "Existed_Raid", 00:07:15.834 "uuid": "d14a7047-f690-4612-bc07-fa6234d44fb1", 00:07:15.834 "strip_size_kb": 64, 00:07:15.834 "state": "configuring", 00:07:15.834 "raid_level": "raid0", 00:07:15.834 "superblock": true, 00:07:15.834 "num_base_bdevs": 2, 00:07:15.834 "num_base_bdevs_discovered": 1, 00:07:15.834 "num_base_bdevs_operational": 2, 00:07:15.834 "base_bdevs_list": [ 00:07:15.834 { 00:07:15.834 "name": "BaseBdev1", 00:07:15.834 "uuid": "c179f4af-ab7d-4b66-ad1f-4428f581f866", 00:07:15.834 "is_configured": true, 00:07:15.834 "data_offset": 2048, 00:07:15.834 "data_size": 63488 00:07:15.834 }, 00:07:15.834 { 00:07:15.834 "name": "BaseBdev2", 00:07:15.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.834 "is_configured": false, 00:07:15.834 "data_offset": 0, 00:07:15.834 "data_size": 0 00:07:15.834 } 00:07:15.834 ] 00:07:15.834 }' 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.834 23:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.095 [2024-11-18 23:02:35.362129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.095 [2024-11-18 23:02:35.362638] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:16.095 [2024-11-18 23:02:35.362748] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.095 BaseBdev2 00:07:16.095 [2024-11-18 23:02:35.363198] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:16.095 [2024-11-18 23:02:35.363394] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:16.095 [2024-11-18 23:02:35.363456] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:16.095 [2024-11-18 23:02:35.363687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.095 
23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.095 [ 00:07:16.095 { 00:07:16.095 "name": "BaseBdev2", 00:07:16.095 "aliases": [ 00:07:16.095 "5fd8545d-7115-410e-9298-784e8236e7fb" 00:07:16.095 ], 00:07:16.095 "product_name": "Malloc disk", 00:07:16.095 "block_size": 512, 00:07:16.095 "num_blocks": 65536, 00:07:16.095 "uuid": "5fd8545d-7115-410e-9298-784e8236e7fb", 00:07:16.095 "assigned_rate_limits": { 00:07:16.095 "rw_ios_per_sec": 0, 00:07:16.095 "rw_mbytes_per_sec": 0, 00:07:16.095 "r_mbytes_per_sec": 0, 00:07:16.095 "w_mbytes_per_sec": 0 00:07:16.095 }, 00:07:16.095 "claimed": true, 00:07:16.095 "claim_type": "exclusive_write", 00:07:16.095 "zoned": false, 00:07:16.095 "supported_io_types": { 00:07:16.095 "read": true, 00:07:16.095 "write": true, 00:07:16.095 "unmap": true, 00:07:16.095 "flush": true, 00:07:16.095 "reset": true, 00:07:16.095 "nvme_admin": false, 00:07:16.095 "nvme_io": false, 00:07:16.095 "nvme_io_md": false, 00:07:16.095 "write_zeroes": true, 00:07:16.095 "zcopy": true, 00:07:16.095 "get_zone_info": false, 00:07:16.095 "zone_management": false, 00:07:16.095 "zone_append": false, 00:07:16.095 "compare": false, 00:07:16.095 "compare_and_write": false, 00:07:16.095 "abort": true, 00:07:16.095 "seek_hole": false, 00:07:16.095 "seek_data": false, 00:07:16.095 "copy": true, 00:07:16.095 "nvme_iov_md": false 00:07:16.095 }, 00:07:16.095 "memory_domains": [ 00:07:16.095 { 00:07:16.095 "dma_device_id": "system", 00:07:16.095 "dma_device_type": 1 00:07:16.095 }, 00:07:16.095 { 00:07:16.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.095 "dma_device_type": 2 00:07:16.095 } 00:07:16.095 ], 00:07:16.095 "driver_specific": {} 00:07:16.095 } 00:07:16.095 ] 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:16.095 23:02:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.095 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.095 23:02:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.095 "name": "Existed_Raid", 00:07:16.096 "uuid": "d14a7047-f690-4612-bc07-fa6234d44fb1", 00:07:16.096 "strip_size_kb": 64, 00:07:16.096 "state": "online", 00:07:16.096 "raid_level": "raid0", 00:07:16.096 "superblock": true, 00:07:16.096 "num_base_bdevs": 2, 00:07:16.096 "num_base_bdevs_discovered": 2, 00:07:16.096 "num_base_bdevs_operational": 2, 00:07:16.096 "base_bdevs_list": [ 00:07:16.096 { 00:07:16.096 "name": "BaseBdev1", 00:07:16.096 "uuid": "c179f4af-ab7d-4b66-ad1f-4428f581f866", 00:07:16.096 "is_configured": true, 00:07:16.096 "data_offset": 2048, 00:07:16.096 "data_size": 63488 00:07:16.096 }, 00:07:16.096 { 00:07:16.096 "name": "BaseBdev2", 00:07:16.096 "uuid": "5fd8545d-7115-410e-9298-784e8236e7fb", 00:07:16.096 "is_configured": true, 00:07:16.096 "data_offset": 2048, 00:07:16.096 "data_size": 63488 00:07:16.096 } 00:07:16.096 ] 00:07:16.096 }' 00:07:16.096 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.096 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.670 [2024-11-18 23:02:35.893522] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.670 "name": "Existed_Raid", 00:07:16.670 "aliases": [ 00:07:16.670 "d14a7047-f690-4612-bc07-fa6234d44fb1" 00:07:16.670 ], 00:07:16.670 "product_name": "Raid Volume", 00:07:16.670 "block_size": 512, 00:07:16.670 "num_blocks": 126976, 00:07:16.670 "uuid": "d14a7047-f690-4612-bc07-fa6234d44fb1", 00:07:16.670 "assigned_rate_limits": { 00:07:16.670 "rw_ios_per_sec": 0, 00:07:16.670 "rw_mbytes_per_sec": 0, 00:07:16.670 "r_mbytes_per_sec": 0, 00:07:16.670 "w_mbytes_per_sec": 0 00:07:16.670 }, 00:07:16.670 "claimed": false, 00:07:16.670 "zoned": false, 00:07:16.670 "supported_io_types": { 00:07:16.670 "read": true, 00:07:16.670 "write": true, 00:07:16.670 "unmap": true, 00:07:16.670 "flush": true, 00:07:16.670 "reset": true, 00:07:16.670 "nvme_admin": false, 00:07:16.670 "nvme_io": false, 00:07:16.670 "nvme_io_md": false, 00:07:16.670 "write_zeroes": true, 00:07:16.670 "zcopy": false, 00:07:16.670 "get_zone_info": false, 00:07:16.670 "zone_management": false, 00:07:16.670 "zone_append": false, 00:07:16.670 "compare": false, 00:07:16.670 "compare_and_write": false, 00:07:16.670 "abort": false, 00:07:16.670 "seek_hole": false, 00:07:16.670 "seek_data": false, 00:07:16.670 "copy": false, 00:07:16.670 "nvme_iov_md": false 00:07:16.670 }, 00:07:16.670 "memory_domains": [ 00:07:16.670 { 00:07:16.670 
"dma_device_id": "system", 00:07:16.670 "dma_device_type": 1 00:07:16.670 }, 00:07:16.670 { 00:07:16.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.670 "dma_device_type": 2 00:07:16.670 }, 00:07:16.670 { 00:07:16.670 "dma_device_id": "system", 00:07:16.670 "dma_device_type": 1 00:07:16.670 }, 00:07:16.670 { 00:07:16.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.670 "dma_device_type": 2 00:07:16.670 } 00:07:16.670 ], 00:07:16.670 "driver_specific": { 00:07:16.670 "raid": { 00:07:16.670 "uuid": "d14a7047-f690-4612-bc07-fa6234d44fb1", 00:07:16.670 "strip_size_kb": 64, 00:07:16.670 "state": "online", 00:07:16.670 "raid_level": "raid0", 00:07:16.670 "superblock": true, 00:07:16.670 "num_base_bdevs": 2, 00:07:16.670 "num_base_bdevs_discovered": 2, 00:07:16.670 "num_base_bdevs_operational": 2, 00:07:16.670 "base_bdevs_list": [ 00:07:16.670 { 00:07:16.670 "name": "BaseBdev1", 00:07:16.670 "uuid": "c179f4af-ab7d-4b66-ad1f-4428f581f866", 00:07:16.670 "is_configured": true, 00:07:16.670 "data_offset": 2048, 00:07:16.670 "data_size": 63488 00:07:16.670 }, 00:07:16.670 { 00:07:16.670 "name": "BaseBdev2", 00:07:16.670 "uuid": "5fd8545d-7115-410e-9298-784e8236e7fb", 00:07:16.670 "is_configured": true, 00:07:16.670 "data_offset": 2048, 00:07:16.670 "data_size": 63488 00:07:16.670 } 00:07:16.670 ] 00:07:16.670 } 00:07:16.670 } 00:07:16.670 }' 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:16.670 BaseBdev2' 00:07:16.670 23:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.670 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.670 23:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.670 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.670 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.670 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.670 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.929 [2024-11-18 23:02:36.132850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.929 [2024-11-18 23:02:36.132925] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.929 [2024-11-18 23:02:36.133001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:16.929 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.930 "name": "Existed_Raid", 00:07:16.930 "uuid": "d14a7047-f690-4612-bc07-fa6234d44fb1", 00:07:16.930 "strip_size_kb": 64, 00:07:16.930 "state": "offline", 00:07:16.930 "raid_level": "raid0", 00:07:16.930 "superblock": true, 00:07:16.930 "num_base_bdevs": 2, 00:07:16.930 "num_base_bdevs_discovered": 1, 00:07:16.930 "num_base_bdevs_operational": 1, 00:07:16.930 "base_bdevs_list": [ 00:07:16.930 { 00:07:16.930 "name": null, 00:07:16.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.930 "is_configured": false, 00:07:16.930 "data_offset": 0, 00:07:16.930 "data_size": 63488 00:07:16.930 }, 00:07:16.930 { 00:07:16.930 "name": "BaseBdev2", 00:07:16.930 "uuid": "5fd8545d-7115-410e-9298-784e8236e7fb", 00:07:16.930 "is_configured": true, 00:07:16.930 "data_offset": 2048, 00:07:16.930 "data_size": 63488 00:07:16.930 } 00:07:16.930 ] 
00:07:16.930 }' 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.930 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.189 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:17.190 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.190 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.190 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.190 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.190 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.449 [2024-11-18 23:02:36.615545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.449 [2024-11-18 23:02:36.615638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.449 23:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72337 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72337 ']' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72337 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72337 00:07:17.449 killing process with pid 72337 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72337' 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72337 00:07:17.449 [2024-11-18 23:02:36.709538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.449 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72337 00:07:17.449 [2024-11-18 23:02:36.710493] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.709 ************************************ 00:07:17.709 END TEST raid_state_function_test_sb 00:07:17.709 ************************************ 00:07:17.709 23:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:17.709 00:07:17.709 real 0m3.909s 00:07:17.709 user 0m6.176s 00:07:17.709 sys 0m0.751s 00:07:17.709 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.709 23:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.709 23:02:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:17.710 23:02:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:17.710 23:02:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.710 23:02:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.710 ************************************ 00:07:17.710 START TEST raid_superblock_test 00:07:17.710 ************************************ 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72578 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72578 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72578 ']' 00:07:17.710 
23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.710 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.969 [2024-11-18 23:02:37.109660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:17.969 [2024-11-18 23:02:37.109806] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72578 ] 00:07:17.969 [2024-11-18 23:02:37.270048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.969 [2024-11-18 23:02:37.316306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.229 [2024-11-18 23:02:37.358603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.229 [2024-11-18 23:02:37.358720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.800 malloc1 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.800 [2024-11-18 23:02:37.952698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:18.800 [2024-11-18 23:02:37.952832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.800 [2024-11-18 23:02:37.952873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:18.800 [2024-11-18 23:02:37.952908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:18.800 [2024-11-18 23:02:37.955015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.800 [2024-11-18 23:02:37.955084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:18.800 pt1 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.800 malloc2 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.800 [2024-11-18 23:02:37.992601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:18.800 [2024-11-18 23:02:37.992708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.800 [2024-11-18 23:02:37.992731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:18.800 [2024-11-18 23:02:37.992743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.800 [2024-11-18 23:02:37.995187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.800 [2024-11-18 23:02:37.995221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:18.800 pt2 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.800 23:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.800 [2024-11-18 23:02:38.004605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:18.800 [2024-11-18 23:02:38.006376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:18.800 [2024-11-18 23:02:38.006501] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:18.800 [2024-11-18 23:02:38.006520] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:18.800 [2024-11-18 23:02:38.006764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:18.800 [2024-11-18 23:02:38.006875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:18.800 [2024-11-18 23:02:38.006884] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:18.800 [2024-11-18 23:02:38.007015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.800 23:02:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.800 "name": "raid_bdev1", 00:07:18.800 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:18.800 "strip_size_kb": 64, 00:07:18.800 "state": "online", 00:07:18.800 "raid_level": "raid0", 00:07:18.800 "superblock": true, 00:07:18.800 "num_base_bdevs": 2, 00:07:18.800 "num_base_bdevs_discovered": 2, 00:07:18.800 "num_base_bdevs_operational": 2, 00:07:18.800 "base_bdevs_list": [ 00:07:18.800 { 00:07:18.800 "name": "pt1", 00:07:18.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.800 "is_configured": true, 00:07:18.800 "data_offset": 2048, 00:07:18.800 "data_size": 63488 00:07:18.800 }, 00:07:18.800 { 00:07:18.800 "name": "pt2", 00:07:18.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.800 "is_configured": true, 00:07:18.800 "data_offset": 2048, 00:07:18.800 "data_size": 63488 00:07:18.800 } 00:07:18.800 ] 00:07:18.800 }' 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.800 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.370 
23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.370 [2024-11-18 23:02:38.448066] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.370 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.370 "name": "raid_bdev1", 00:07:19.370 "aliases": [ 00:07:19.370 "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d" 00:07:19.370 ], 00:07:19.370 "product_name": "Raid Volume", 00:07:19.370 "block_size": 512, 00:07:19.370 "num_blocks": 126976, 00:07:19.370 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:19.370 "assigned_rate_limits": { 00:07:19.370 "rw_ios_per_sec": 0, 00:07:19.370 "rw_mbytes_per_sec": 0, 00:07:19.370 "r_mbytes_per_sec": 0, 00:07:19.370 "w_mbytes_per_sec": 0 00:07:19.370 }, 00:07:19.370 "claimed": false, 00:07:19.370 "zoned": false, 00:07:19.370 "supported_io_types": { 00:07:19.370 "read": true, 00:07:19.370 "write": true, 00:07:19.370 "unmap": true, 00:07:19.370 "flush": true, 00:07:19.370 "reset": true, 00:07:19.370 "nvme_admin": false, 00:07:19.370 "nvme_io": false, 00:07:19.370 "nvme_io_md": false, 00:07:19.370 "write_zeroes": true, 00:07:19.370 "zcopy": false, 00:07:19.370 "get_zone_info": false, 00:07:19.370 "zone_management": false, 00:07:19.370 "zone_append": false, 00:07:19.370 "compare": false, 00:07:19.370 "compare_and_write": false, 00:07:19.370 "abort": false, 00:07:19.370 "seek_hole": false, 00:07:19.370 
"seek_data": false, 00:07:19.370 "copy": false, 00:07:19.370 "nvme_iov_md": false 00:07:19.370 }, 00:07:19.370 "memory_domains": [ 00:07:19.370 { 00:07:19.370 "dma_device_id": "system", 00:07:19.370 "dma_device_type": 1 00:07:19.370 }, 00:07:19.370 { 00:07:19.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.370 "dma_device_type": 2 00:07:19.370 }, 00:07:19.370 { 00:07:19.370 "dma_device_id": "system", 00:07:19.370 "dma_device_type": 1 00:07:19.370 }, 00:07:19.370 { 00:07:19.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.370 "dma_device_type": 2 00:07:19.370 } 00:07:19.370 ], 00:07:19.370 "driver_specific": { 00:07:19.370 "raid": { 00:07:19.370 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:19.370 "strip_size_kb": 64, 00:07:19.370 "state": "online", 00:07:19.370 "raid_level": "raid0", 00:07:19.370 "superblock": true, 00:07:19.370 "num_base_bdevs": 2, 00:07:19.370 "num_base_bdevs_discovered": 2, 00:07:19.370 "num_base_bdevs_operational": 2, 00:07:19.370 "base_bdevs_list": [ 00:07:19.370 { 00:07:19.370 "name": "pt1", 00:07:19.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.370 "is_configured": true, 00:07:19.370 "data_offset": 2048, 00:07:19.370 "data_size": 63488 00:07:19.370 }, 00:07:19.370 { 00:07:19.370 "name": "pt2", 00:07:19.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.370 "is_configured": true, 00:07:19.370 "data_offset": 2048, 00:07:19.370 "data_size": 63488 00:07:19.370 } 00:07:19.370 ] 00:07:19.370 } 00:07:19.370 } 00:07:19.370 }' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:19.371 pt2' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.371 23:02:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.371 [2024-11-18 23:02:38.651651] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7fb7efae-4f20-486e-9b2b-cf7cc32ad65d 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7fb7efae-4f20-486e-9b2b-cf7cc32ad65d ']' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.371 [2024-11-18 23:02:38.719331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.371 [2024-11-18 23:02:38.719358] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.371 [2024-11-18 23:02:38.719428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.371 [2024-11-18 23:02:38.719474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.371 [2024-11-18 23:02:38.719490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.371 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 [2024-11-18 23:02:38.855171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:19.632 [2024-11-18 23:02:38.856984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:19.632 [2024-11-18 23:02:38.857052] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:19.632 [2024-11-18 23:02:38.857095] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:19.632 [2024-11-18 23:02:38.857111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.632 [2024-11-18 23:02:38.857125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:19.632 request: 00:07:19.632 { 00:07:19.632 "name": "raid_bdev1", 00:07:19.632 "raid_level": "raid0", 00:07:19.632 "base_bdevs": [ 00:07:19.632 "malloc1", 00:07:19.632 "malloc2" 00:07:19.632 ], 00:07:19.632 "strip_size_kb": 64, 00:07:19.632 "superblock": false, 00:07:19.632 "method": "bdev_raid_create", 00:07:19.632 "req_id": 1 00:07:19.632 } 00:07:19.632 Got JSON-RPC error response 00:07:19.632 response: 00:07:19.632 { 00:07:19.632 "code": -17, 00:07:19.632 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:19.632 } 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 
23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 [2024-11-18 23:02:38.903047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.632 [2024-11-18 23:02:38.903134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.632 [2024-11-18 23:02:38.903187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:19.632 [2024-11-18 23:02:38.903220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.632 [2024-11-18 23:02:38.905361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.632 [2024-11-18 23:02:38.905429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.632 [2024-11-18 23:02:38.905519] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:19.632 [2024-11-18 23:02:38.905592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.632 pt1 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.632 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.633 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.633 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.633 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.633 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.633 "name": "raid_bdev1", 00:07:19.633 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:19.633 "strip_size_kb": 64, 00:07:19.633 "state": "configuring", 00:07:19.633 "raid_level": "raid0", 00:07:19.633 "superblock": true, 00:07:19.633 "num_base_bdevs": 2, 00:07:19.633 "num_base_bdevs_discovered": 1, 00:07:19.633 "num_base_bdevs_operational": 2, 00:07:19.633 "base_bdevs_list": [ 00:07:19.633 { 00:07:19.633 "name": "pt1", 00:07:19.633 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:19.633 "is_configured": true, 00:07:19.633 "data_offset": 2048, 00:07:19.633 "data_size": 63488 00:07:19.633 }, 00:07:19.633 { 00:07:19.633 "name": null, 00:07:19.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.633 "is_configured": false, 00:07:19.633 "data_offset": 2048, 00:07:19.633 "data_size": 63488 00:07:19.633 } 00:07:19.633 ] 00:07:19.633 }' 00:07:19.633 23:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.633 23:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.203 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:20.203 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:20.203 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.203 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.204 [2024-11-18 23:02:39.338349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.204 [2024-11-18 23:02:39.338439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.204 [2024-11-18 23:02:39.338495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:20.204 [2024-11-18 23:02:39.338529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.204 [2024-11-18 23:02:39.338919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.204 [2024-11-18 23:02:39.338977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:20.204 [2024-11-18 23:02:39.339068] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:20.204 [2024-11-18 23:02:39.339092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.204 [2024-11-18 23:02:39.339182] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:20.204 [2024-11-18 23:02:39.339191] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.204 [2024-11-18 23:02:39.339434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:20.204 [2024-11-18 23:02:39.339541] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:20.204 [2024-11-18 23:02:39.339556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:20.204 [2024-11-18 23:02:39.339646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.204 pt2 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.204 "name": "raid_bdev1", 00:07:20.204 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:20.204 "strip_size_kb": 64, 00:07:20.204 "state": "online", 00:07:20.204 "raid_level": "raid0", 00:07:20.204 "superblock": true, 00:07:20.204 "num_base_bdevs": 2, 00:07:20.204 "num_base_bdevs_discovered": 2, 00:07:20.204 "num_base_bdevs_operational": 2, 00:07:20.204 "base_bdevs_list": [ 00:07:20.204 { 00:07:20.204 "name": "pt1", 00:07:20.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.204 "is_configured": true, 00:07:20.204 "data_offset": 2048, 00:07:20.204 "data_size": 63488 00:07:20.204 }, 00:07:20.204 { 00:07:20.204 "name": "pt2", 00:07:20.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.204 "is_configured": true, 00:07:20.204 "data_offset": 2048, 00:07:20.204 "data_size": 63488 00:07:20.204 } 00:07:20.204 ] 00:07:20.204 }' 00:07:20.204 23:02:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.204 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.464 [2024-11-18 23:02:39.797774] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.464 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.724 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.724 "name": "raid_bdev1", 00:07:20.724 "aliases": [ 00:07:20.724 "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d" 00:07:20.724 ], 00:07:20.724 "product_name": "Raid Volume", 00:07:20.724 "block_size": 512, 00:07:20.724 "num_blocks": 126976, 00:07:20.724 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:20.724 "assigned_rate_limits": { 00:07:20.724 "rw_ios_per_sec": 0, 00:07:20.724 "rw_mbytes_per_sec": 0, 00:07:20.724 
"r_mbytes_per_sec": 0, 00:07:20.724 "w_mbytes_per_sec": 0 00:07:20.724 }, 00:07:20.724 "claimed": false, 00:07:20.724 "zoned": false, 00:07:20.724 "supported_io_types": { 00:07:20.724 "read": true, 00:07:20.724 "write": true, 00:07:20.724 "unmap": true, 00:07:20.724 "flush": true, 00:07:20.724 "reset": true, 00:07:20.724 "nvme_admin": false, 00:07:20.724 "nvme_io": false, 00:07:20.724 "nvme_io_md": false, 00:07:20.724 "write_zeroes": true, 00:07:20.724 "zcopy": false, 00:07:20.724 "get_zone_info": false, 00:07:20.724 "zone_management": false, 00:07:20.724 "zone_append": false, 00:07:20.724 "compare": false, 00:07:20.724 "compare_and_write": false, 00:07:20.724 "abort": false, 00:07:20.724 "seek_hole": false, 00:07:20.724 "seek_data": false, 00:07:20.724 "copy": false, 00:07:20.724 "nvme_iov_md": false 00:07:20.724 }, 00:07:20.724 "memory_domains": [ 00:07:20.724 { 00:07:20.724 "dma_device_id": "system", 00:07:20.724 "dma_device_type": 1 00:07:20.724 }, 00:07:20.724 { 00:07:20.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.724 "dma_device_type": 2 00:07:20.724 }, 00:07:20.724 { 00:07:20.724 "dma_device_id": "system", 00:07:20.724 "dma_device_type": 1 00:07:20.724 }, 00:07:20.724 { 00:07:20.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.725 "dma_device_type": 2 00:07:20.725 } 00:07:20.725 ], 00:07:20.725 "driver_specific": { 00:07:20.725 "raid": { 00:07:20.725 "uuid": "7fb7efae-4f20-486e-9b2b-cf7cc32ad65d", 00:07:20.725 "strip_size_kb": 64, 00:07:20.725 "state": "online", 00:07:20.725 "raid_level": "raid0", 00:07:20.725 "superblock": true, 00:07:20.725 "num_base_bdevs": 2, 00:07:20.725 "num_base_bdevs_discovered": 2, 00:07:20.725 "num_base_bdevs_operational": 2, 00:07:20.725 "base_bdevs_list": [ 00:07:20.725 { 00:07:20.725 "name": "pt1", 00:07:20.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.725 "is_configured": true, 00:07:20.725 "data_offset": 2048, 00:07:20.725 "data_size": 63488 00:07:20.725 }, 00:07:20.725 { 00:07:20.725 "name": 
"pt2", 00:07:20.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.725 "is_configured": true, 00:07:20.725 "data_offset": 2048, 00:07:20.725 "data_size": 63488 00:07:20.725 } 00:07:20.725 ] 00:07:20.725 } 00:07:20.725 } 00:07:20.725 }' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:20.725 pt2' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.725 23:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.725 [2024-11-18 23:02:40.057309] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7fb7efae-4f20-486e-9b2b-cf7cc32ad65d '!=' 7fb7efae-4f20-486e-9b2b-cf7cc32ad65d ']' 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72578 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72578 ']' 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 72578 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.725 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72578 00:07:20.985 killing process with pid 72578 00:07:20.985 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.985 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.985 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72578' 00:07:20.985 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72578 00:07:20.985 [2024-11-18 23:02:40.124430] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.985 [2024-11-18 23:02:40.124499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.985 [2024-11-18 23:02:40.124546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.985 [2024-11-18 23:02:40.124554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:20.985 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72578 00:07:20.985 [2024-11-18 23:02:40.147109] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.245 ************************************ 00:07:21.245 END TEST raid_superblock_test 00:07:21.245 ************************************ 00:07:21.245 23:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:21.245 00:07:21.245 real 0m3.369s 00:07:21.245 user 0m5.201s 00:07:21.245 sys 0m0.685s 00:07:21.245 23:02:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.245 23:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.245 23:02:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:21.245 23:02:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:21.245 23:02:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.245 23:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.245 ************************************ 00:07:21.245 START TEST raid_read_error_test 00:07:21.245 ************************************ 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DHyGhaZJvZ 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72773 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72773 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72773 ']' 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.245 23:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.246 23:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 [2024-11-18 23:02:40.553985] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:21.246 [2024-11-18 23:02:40.554191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72773 ] 00:07:21.505 [2024-11-18 23:02:40.703324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.505 [2024-11-18 23:02:40.746703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.505 [2024-11-18 23:02:40.789400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.505 [2024-11-18 23:02:40.789497] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 BaseBdev1_malloc 
00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 true 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 [2024-11-18 23:02:41.396204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:22.076 [2024-11-18 23:02:41.396258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.076 [2024-11-18 23:02:41.396292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:22.076 [2024-11-18 23:02:41.396319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.076 [2024-11-18 23:02:41.398397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.076 [2024-11-18 23:02:41.398429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:22.076 BaseBdev1 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 BaseBdev2_malloc 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 true 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.076 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 [2024-11-18 23:02:41.447029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:22.076 [2024-11-18 23:02:41.447074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.076 [2024-11-18 23:02:41.447107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:22.076 [2024-11-18 23:02:41.447115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.076 [2024-11-18 23:02:41.449161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.076 [2024-11-18 23:02:41.449195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:22.335 BaseBdev2 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.335 [2024-11-18 23:02:41.459048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.335 [2024-11-18 23:02:41.460886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.335 [2024-11-18 23:02:41.461084] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:22.335 [2024-11-18 23:02:41.461096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.335 [2024-11-18 23:02:41.461341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:22.335 [2024-11-18 23:02:41.461464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:22.335 [2024-11-18 23:02:41.461496] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:22.335 [2024-11-18 23:02:41.461637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.335 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.335 "name": "raid_bdev1", 00:07:22.335 "uuid": "270a933b-86aa-44b5-9100-f72c79d2f2e3", 00:07:22.335 "strip_size_kb": 64, 00:07:22.335 "state": "online", 00:07:22.335 "raid_level": "raid0", 00:07:22.335 "superblock": true, 00:07:22.335 "num_base_bdevs": 2, 00:07:22.335 "num_base_bdevs_discovered": 2, 00:07:22.335 "num_base_bdevs_operational": 2, 00:07:22.335 "base_bdevs_list": [ 00:07:22.335 { 00:07:22.335 "name": "BaseBdev1", 00:07:22.335 "uuid": "56495f0a-c1cd-5492-ba64-83c57cf39c4f", 00:07:22.335 "is_configured": true, 00:07:22.335 "data_offset": 2048, 00:07:22.336 "data_size": 63488 00:07:22.336 }, 00:07:22.336 { 00:07:22.336 "name": "BaseBdev2", 00:07:22.336 "uuid": 
"a533b9fc-cd11-5380-b444-1dbce92191ca", 00:07:22.336 "is_configured": true, 00:07:22.336 "data_offset": 2048, 00:07:22.336 "data_size": 63488 00:07:22.336 } 00:07:22.336 ] 00:07:22.336 }' 00:07:22.336 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.336 23:02:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.595 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:22.595 23:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:22.855 [2024-11-18 23:02:42.002501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.793 "name": "raid_bdev1", 00:07:23.793 "uuid": "270a933b-86aa-44b5-9100-f72c79d2f2e3", 00:07:23.793 "strip_size_kb": 64, 00:07:23.793 "state": "online", 00:07:23.793 "raid_level": "raid0", 00:07:23.793 "superblock": true, 00:07:23.793 "num_base_bdevs": 2, 00:07:23.793 "num_base_bdevs_discovered": 2, 00:07:23.793 "num_base_bdevs_operational": 2, 00:07:23.793 "base_bdevs_list": [ 00:07:23.793 { 00:07:23.793 "name": "BaseBdev1", 00:07:23.793 "uuid": "56495f0a-c1cd-5492-ba64-83c57cf39c4f", 00:07:23.793 "is_configured": true, 00:07:23.793 "data_offset": 2048, 00:07:23.793 "data_size": 63488 00:07:23.793 }, 00:07:23.793 { 00:07:23.793 "name": "BaseBdev2", 00:07:23.793 "uuid": 
"a533b9fc-cd11-5380-b444-1dbce92191ca", 00:07:23.793 "is_configured": true, 00:07:23.793 "data_offset": 2048, 00:07:23.793 "data_size": 63488 00:07:23.793 } 00:07:23.793 ] 00:07:23.793 }' 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.793 23:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.053 [2024-11-18 23:02:43.366072] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:24.053 [2024-11-18 23:02:43.366103] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.053 [2024-11-18 23:02:43.368558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.053 [2024-11-18 23:02:43.368595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.053 [2024-11-18 23:02:43.368627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.053 [2024-11-18 23:02:43.368637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:24.053 { 00:07:24.053 "results": [ 00:07:24.053 { 00:07:24.053 "job": "raid_bdev1", 00:07:24.053 "core_mask": "0x1", 00:07:24.053 "workload": "randrw", 00:07:24.053 "percentage": 50, 00:07:24.053 "status": "finished", 00:07:24.053 "queue_depth": 1, 00:07:24.053 "io_size": 131072, 00:07:24.053 "runtime": 1.364392, 00:07:24.053 "iops": 18054.92849562296, 00:07:24.053 "mibps": 2256.86606195287, 00:07:24.053 "io_failed": 1, 00:07:24.053 "io_timeout": 0, 00:07:24.053 "avg_latency_us": 76.5972276104488, 
00:07:24.053 "min_latency_us": 24.258515283842794, 00:07:24.053 "max_latency_us": 1366.5257641921398 00:07:24.053 } 00:07:24.053 ], 00:07:24.053 "core_count": 1 00:07:24.053 } 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72773 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72773 ']' 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72773 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72773 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72773' 00:07:24.053 killing process with pid 72773 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72773 00:07:24.053 [2024-11-18 23:02:43.404074] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.053 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72773 00:07:24.053 [2024-11-18 23:02:43.419394] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DHyGhaZJvZ 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:24.313 23:02:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:24.313 ************************************ 00:07:24.313 END TEST raid_read_error_test 00:07:24.313 ************************************ 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:24.313 00:07:24.313 real 0m3.202s 00:07:24.313 user 0m4.092s 00:07:24.313 sys 0m0.471s 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.313 23:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.572 23:02:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:24.572 23:02:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:24.572 23:02:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.572 23:02:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.572 ************************************ 00:07:24.572 START TEST raid_write_error_test 00:07:24.572 ************************************ 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:24.572 
23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.572 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:24.573 23:02:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9YBwE6MzFr 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72902 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72902 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72902 ']' 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.573 23:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.573 [2024-11-18 23:02:43.820755] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:24.573 [2024-11-18 23:02:43.820959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72902 ] 00:07:24.833 [2024-11-18 23:02:43.970916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.833 [2024-11-18 23:02:44.017980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.833 [2024-11-18 23:02:44.060424] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.833 [2024-11-18 23:02:44.060461] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.406 BaseBdev1_malloc 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.406 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 true 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 [2024-11-18 23:02:44.670667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:25.407 [2024-11-18 23:02:44.670723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.407 [2024-11-18 23:02:44.670758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:25.407 [2024-11-18 23:02:44.670766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.407 [2024-11-18 23:02:44.672921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.407 [2024-11-18 23:02:44.673019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:25.407 BaseBdev1 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 BaseBdev2_malloc 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:25.407 23:02:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 true 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 [2024-11-18 23:02:44.727752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:25.407 [2024-11-18 23:02:44.727828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.407 [2024-11-18 23:02:44.727851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:25.407 [2024-11-18 23:02:44.727862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.407 [2024-11-18 23:02:44.730559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.407 [2024-11-18 23:02:44.730602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:25.407 BaseBdev2 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 [2024-11-18 23:02:44.739687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:25.407 [2024-11-18 23:02:44.741541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.407 [2024-11-18 23:02:44.741694] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:25.407 [2024-11-18 23:02:44.741707] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:25.407 [2024-11-18 23:02:44.741935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:25.407 [2024-11-18 23:02:44.742063] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:25.407 [2024-11-18 23:02:44.742083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:25.407 [2024-11-18 23:02:44.742202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.407 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.671 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.671 "name": "raid_bdev1", 00:07:25.671 "uuid": "48c07e9d-4146-4967-b0df-cf3163b911c8", 00:07:25.671 "strip_size_kb": 64, 00:07:25.671 "state": "online", 00:07:25.671 "raid_level": "raid0", 00:07:25.671 "superblock": true, 00:07:25.671 "num_base_bdevs": 2, 00:07:25.671 "num_base_bdevs_discovered": 2, 00:07:25.671 "num_base_bdevs_operational": 2, 00:07:25.671 "base_bdevs_list": [ 00:07:25.671 { 00:07:25.671 "name": "BaseBdev1", 00:07:25.671 "uuid": "26047e2c-5cff-51a6-91c2-18befc71dd01", 00:07:25.671 "is_configured": true, 00:07:25.671 "data_offset": 2048, 00:07:25.671 "data_size": 63488 00:07:25.671 }, 00:07:25.671 { 00:07:25.671 "name": "BaseBdev2", 00:07:25.671 "uuid": "68a0ba77-bd7f-58a1-a733-440dcda0bab6", 00:07:25.671 "is_configured": true, 00:07:25.671 "data_offset": 2048, 00:07:25.671 "data_size": 63488 00:07:25.671 } 00:07:25.671 ] 00:07:25.671 }' 00:07:25.671 23:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.671 23:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.931 23:02:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:25.931 23:02:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:25.931 [2024-11-18 23:02:45.299096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.870 23:02:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.870 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.131 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.131 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.131 "name": "raid_bdev1", 00:07:27.131 "uuid": "48c07e9d-4146-4967-b0df-cf3163b911c8", 00:07:27.131 "strip_size_kb": 64, 00:07:27.131 "state": "online", 00:07:27.131 "raid_level": "raid0", 00:07:27.131 "superblock": true, 00:07:27.131 "num_base_bdevs": 2, 00:07:27.131 "num_base_bdevs_discovered": 2, 00:07:27.131 "num_base_bdevs_operational": 2, 00:07:27.131 "base_bdevs_list": [ 00:07:27.131 { 00:07:27.131 "name": "BaseBdev1", 00:07:27.131 "uuid": "26047e2c-5cff-51a6-91c2-18befc71dd01", 00:07:27.131 "is_configured": true, 00:07:27.131 "data_offset": 2048, 00:07:27.131 "data_size": 63488 00:07:27.131 }, 00:07:27.131 { 00:07:27.131 "name": "BaseBdev2", 00:07:27.131 "uuid": "68a0ba77-bd7f-58a1-a733-440dcda0bab6", 00:07:27.131 "is_configured": true, 00:07:27.131 "data_offset": 2048, 00:07:27.131 "data_size": 63488 00:07:27.131 } 00:07:27.131 ] 00:07:27.131 }' 00:07:27.131 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.131 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.392 [2024-11-18 23:02:46.622239] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.392 [2024-11-18 23:02:46.622344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.392 [2024-11-18 23:02:46.624952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.392 [2024-11-18 23:02:46.625029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.392 [2024-11-18 23:02:46.625080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.392 [2024-11-18 23:02:46.625128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:27.392 { 00:07:27.392 "results": [ 00:07:27.392 { 00:07:27.392 "job": "raid_bdev1", 00:07:27.392 "core_mask": "0x1", 00:07:27.392 "workload": "randrw", 00:07:27.392 "percentage": 50, 00:07:27.392 "status": "finished", 00:07:27.392 "queue_depth": 1, 00:07:27.392 "io_size": 131072, 00:07:27.392 "runtime": 1.324012, 00:07:27.392 "iops": 18174.306577281775, 00:07:27.392 "mibps": 2271.788322160222, 00:07:27.392 "io_failed": 1, 00:07:27.392 "io_timeout": 0, 00:07:27.392 "avg_latency_us": 76.04761683545479, 00:07:27.392 "min_latency_us": 24.258515283842794, 00:07:27.392 "max_latency_us": 1373.6803493449781 00:07:27.392 } 00:07:27.392 ], 00:07:27.392 "core_count": 1 00:07:27.392 } 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72902 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72902 ']' 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72902 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72902 00:07:27.392 killing process with pid 72902 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72902' 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72902 00:07:27.392 [2024-11-18 23:02:46.671549] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.392 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72902 00:07:27.392 [2024-11-18 23:02:46.686256] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9YBwE6MzFr 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:07:27.652 00:07:27.652 real 0m3.205s 00:07:27.652 user 0m4.031s 00:07:27.652 sys 0m0.514s 00:07:27.652 ************************************ 00:07:27.652 END TEST raid_write_error_test 00:07:27.652 ************************************ 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.652 23:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.652 23:02:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:27.652 23:02:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:27.652 23:02:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.652 23:02:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.652 23:02:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.652 ************************************ 00:07:27.652 START TEST raid_state_function_test 00:07:27.652 ************************************ 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.652 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:27.652 Process raid pid: 73029 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73029 
00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73029' 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73029 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73029 ']' 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.652 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.653 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.653 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.912 [2024-11-18 23:02:47.091442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:27.912 [2024-11-18 23:02:47.091641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.912 [2024-11-18 23:02:47.252455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.172 [2024-11-18 23:02:47.298904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.172 [2024-11-18 23:02:47.341407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.172 [2024-11-18 23:02:47.341523] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.741 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 [2024-11-18 23:02:47.911129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.742 [2024-11-18 23:02:47.911260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.742 [2024-11-18 23:02:47.911312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.742 [2024-11-18 23:02:47.911337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.742 23:02:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.742 "name": "Existed_Raid", 00:07:28.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.742 "strip_size_kb": 64, 00:07:28.742 "state": "configuring", 00:07:28.742 
"raid_level": "concat", 00:07:28.742 "superblock": false, 00:07:28.742 "num_base_bdevs": 2, 00:07:28.742 "num_base_bdevs_discovered": 0, 00:07:28.742 "num_base_bdevs_operational": 2, 00:07:28.742 "base_bdevs_list": [ 00:07:28.742 { 00:07:28.742 "name": "BaseBdev1", 00:07:28.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.742 "is_configured": false, 00:07:28.742 "data_offset": 0, 00:07:28.742 "data_size": 0 00:07:28.742 }, 00:07:28.742 { 00:07:28.742 "name": "BaseBdev2", 00:07:28.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.742 "is_configured": false, 00:07:28.742 "data_offset": 0, 00:07:28.742 "data_size": 0 00:07:28.742 } 00:07:28.742 ] 00:07:28.742 }' 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.742 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 [2024-11-18 23:02:48.354278] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.002 [2024-11-18 23:02:48.354334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:29.002 [2024-11-18 23:02:48.366306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.002 [2024-11-18 23:02:48.366345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.002 [2024-11-18 23:02:48.366353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.002 [2024-11-18 23:02:48.366377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.002 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 [2024-11-18 23:02:48.387016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.263 BaseBdev1 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 [ 00:07:29.263 { 00:07:29.263 "name": "BaseBdev1", 00:07:29.263 "aliases": [ 00:07:29.263 "1f9618f7-6748-41dc-b36e-30d20f2a080b" 00:07:29.263 ], 00:07:29.263 "product_name": "Malloc disk", 00:07:29.263 "block_size": 512, 00:07:29.263 "num_blocks": 65536, 00:07:29.263 "uuid": "1f9618f7-6748-41dc-b36e-30d20f2a080b", 00:07:29.263 "assigned_rate_limits": { 00:07:29.263 "rw_ios_per_sec": 0, 00:07:29.263 "rw_mbytes_per_sec": 0, 00:07:29.263 "r_mbytes_per_sec": 0, 00:07:29.263 "w_mbytes_per_sec": 0 00:07:29.263 }, 00:07:29.263 "claimed": true, 00:07:29.263 "claim_type": "exclusive_write", 00:07:29.263 "zoned": false, 00:07:29.263 "supported_io_types": { 00:07:29.263 "read": true, 00:07:29.263 "write": true, 00:07:29.263 "unmap": true, 00:07:29.263 "flush": true, 00:07:29.263 "reset": true, 00:07:29.263 "nvme_admin": false, 00:07:29.263 "nvme_io": false, 00:07:29.263 "nvme_io_md": false, 00:07:29.263 "write_zeroes": true, 00:07:29.263 "zcopy": true, 00:07:29.263 "get_zone_info": false, 00:07:29.263 "zone_management": false, 00:07:29.263 "zone_append": false, 00:07:29.263 "compare": false, 00:07:29.263 "compare_and_write": false, 00:07:29.263 "abort": true, 00:07:29.263 "seek_hole": false, 00:07:29.263 "seek_data": false, 00:07:29.263 "copy": true, 00:07:29.263 "nvme_iov_md": 
false 00:07:29.263 }, 00:07:29.263 "memory_domains": [ 00:07:29.263 { 00:07:29.263 "dma_device_id": "system", 00:07:29.263 "dma_device_type": 1 00:07:29.263 }, 00:07:29.263 { 00:07:29.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.263 "dma_device_type": 2 00:07:29.263 } 00:07:29.263 ], 00:07:29.263 "driver_specific": {} 00:07:29.263 } 00:07:29.263 ] 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.263 
23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.263 "name": "Existed_Raid", 00:07:29.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.263 "strip_size_kb": 64, 00:07:29.263 "state": "configuring", 00:07:29.263 "raid_level": "concat", 00:07:29.263 "superblock": false, 00:07:29.263 "num_base_bdevs": 2, 00:07:29.263 "num_base_bdevs_discovered": 1, 00:07:29.263 "num_base_bdevs_operational": 2, 00:07:29.263 "base_bdevs_list": [ 00:07:29.263 { 00:07:29.263 "name": "BaseBdev1", 00:07:29.263 "uuid": "1f9618f7-6748-41dc-b36e-30d20f2a080b", 00:07:29.263 "is_configured": true, 00:07:29.263 "data_offset": 0, 00:07:29.263 "data_size": 65536 00:07:29.263 }, 00:07:29.263 { 00:07:29.263 "name": "BaseBdev2", 00:07:29.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.263 "is_configured": false, 00:07:29.263 "data_offset": 0, 00:07:29.263 "data_size": 0 00:07:29.263 } 00:07:29.263 ] 00:07:29.263 }' 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.263 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.523 [2024-11-18 23:02:48.830289] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.523 [2024-11-18 23:02:48.830389] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.523 [2024-11-18 23:02:48.838315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.523 [2024-11-18 23:02:48.840153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.523 [2024-11-18 23:02:48.840226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.523 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.523 "name": "Existed_Raid", 00:07:29.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.523 "strip_size_kb": 64, 00:07:29.523 "state": "configuring", 00:07:29.523 "raid_level": "concat", 00:07:29.523 "superblock": false, 00:07:29.523 "num_base_bdevs": 2, 00:07:29.523 "num_base_bdevs_discovered": 1, 00:07:29.523 "num_base_bdevs_operational": 2, 00:07:29.523 "base_bdevs_list": [ 00:07:29.523 { 00:07:29.523 "name": "BaseBdev1", 00:07:29.523 "uuid": "1f9618f7-6748-41dc-b36e-30d20f2a080b", 00:07:29.523 "is_configured": true, 00:07:29.523 "data_offset": 0, 00:07:29.524 "data_size": 65536 00:07:29.524 }, 00:07:29.524 { 00:07:29.524 "name": "BaseBdev2", 00:07:29.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.524 "is_configured": false, 00:07:29.524 "data_offset": 0, 00:07:29.524 "data_size": 0 00:07:29.524 } 
00:07:29.524 ] 00:07:29.524 }' 00:07:29.524 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.524 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.092 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.092 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.092 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.093 [2024-11-18 23:02:49.332391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.093 [2024-11-18 23:02:49.332646] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:30.093 [2024-11-18 23:02:49.332689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:30.093 [2024-11-18 23:02:49.333616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:30.093 [2024-11-18 23:02:49.334047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:30.093 [2024-11-18 23:02:49.334097] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:30.093 BaseBdev2 00:07:30.093 [2024-11-18 23:02:49.334737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:30.093 23:02:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.093 [ 00:07:30.093 { 00:07:30.093 "name": "BaseBdev2", 00:07:30.093 "aliases": [ 00:07:30.093 "e9e39a5d-c427-44d4-b8ce-d59b9771b9d9" 00:07:30.093 ], 00:07:30.093 "product_name": "Malloc disk", 00:07:30.093 "block_size": 512, 00:07:30.093 "num_blocks": 65536, 00:07:30.093 "uuid": "e9e39a5d-c427-44d4-b8ce-d59b9771b9d9", 00:07:30.093 "assigned_rate_limits": { 00:07:30.093 "rw_ios_per_sec": 0, 00:07:30.093 "rw_mbytes_per_sec": 0, 00:07:30.093 "r_mbytes_per_sec": 0, 00:07:30.093 "w_mbytes_per_sec": 0 00:07:30.093 }, 00:07:30.093 "claimed": true, 00:07:30.093 "claim_type": "exclusive_write", 00:07:30.093 "zoned": false, 00:07:30.093 "supported_io_types": { 00:07:30.093 "read": true, 00:07:30.093 "write": true, 00:07:30.093 "unmap": true, 00:07:30.093 "flush": true, 00:07:30.093 "reset": true, 00:07:30.093 "nvme_admin": false, 00:07:30.093 "nvme_io": false, 00:07:30.093 "nvme_io_md": 
false, 00:07:30.093 "write_zeroes": true, 00:07:30.093 "zcopy": true, 00:07:30.093 "get_zone_info": false, 00:07:30.093 "zone_management": false, 00:07:30.093 "zone_append": false, 00:07:30.093 "compare": false, 00:07:30.093 "compare_and_write": false, 00:07:30.093 "abort": true, 00:07:30.093 "seek_hole": false, 00:07:30.093 "seek_data": false, 00:07:30.093 "copy": true, 00:07:30.093 "nvme_iov_md": false 00:07:30.093 }, 00:07:30.093 "memory_domains": [ 00:07:30.093 { 00:07:30.093 "dma_device_id": "system", 00:07:30.093 "dma_device_type": 1 00:07:30.093 }, 00:07:30.093 { 00:07:30.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.093 "dma_device_type": 2 00:07:30.093 } 00:07:30.093 ], 00:07:30.093 "driver_specific": {} 00:07:30.093 } 00:07:30.093 ] 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.093 "name": "Existed_Raid", 00:07:30.093 "uuid": "3cea64d4-7158-4dfa-a1aa-4d0f12920d49", 00:07:30.093 "strip_size_kb": 64, 00:07:30.093 "state": "online", 00:07:30.093 "raid_level": "concat", 00:07:30.093 "superblock": false, 00:07:30.093 "num_base_bdevs": 2, 00:07:30.093 "num_base_bdevs_discovered": 2, 00:07:30.093 "num_base_bdevs_operational": 2, 00:07:30.093 "base_bdevs_list": [ 00:07:30.093 { 00:07:30.093 "name": "BaseBdev1", 00:07:30.093 "uuid": "1f9618f7-6748-41dc-b36e-30d20f2a080b", 00:07:30.093 "is_configured": true, 00:07:30.093 "data_offset": 0, 00:07:30.093 "data_size": 65536 00:07:30.093 }, 00:07:30.093 { 00:07:30.093 "name": "BaseBdev2", 00:07:30.093 "uuid": "e9e39a5d-c427-44d4-b8ce-d59b9771b9d9", 00:07:30.093 "is_configured": true, 00:07:30.093 "data_offset": 0, 00:07:30.093 "data_size": 65536 00:07:30.093 } 00:07:30.093 ] 00:07:30.093 }' 00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:30.093 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 [2024-11-18 23:02:49.819712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.662 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.662 "name": "Existed_Raid", 00:07:30.662 "aliases": [ 00:07:30.662 "3cea64d4-7158-4dfa-a1aa-4d0f12920d49" 00:07:30.662 ], 00:07:30.662 "product_name": "Raid Volume", 00:07:30.662 "block_size": 512, 00:07:30.662 "num_blocks": 131072, 00:07:30.662 "uuid": "3cea64d4-7158-4dfa-a1aa-4d0f12920d49", 00:07:30.662 "assigned_rate_limits": { 00:07:30.663 "rw_ios_per_sec": 0, 00:07:30.663 "rw_mbytes_per_sec": 0, 00:07:30.663 "r_mbytes_per_sec": 
0, 00:07:30.663 "w_mbytes_per_sec": 0 00:07:30.663 }, 00:07:30.663 "claimed": false, 00:07:30.663 "zoned": false, 00:07:30.663 "supported_io_types": { 00:07:30.663 "read": true, 00:07:30.663 "write": true, 00:07:30.663 "unmap": true, 00:07:30.663 "flush": true, 00:07:30.663 "reset": true, 00:07:30.663 "nvme_admin": false, 00:07:30.663 "nvme_io": false, 00:07:30.663 "nvme_io_md": false, 00:07:30.663 "write_zeroes": true, 00:07:30.663 "zcopy": false, 00:07:30.663 "get_zone_info": false, 00:07:30.663 "zone_management": false, 00:07:30.663 "zone_append": false, 00:07:30.663 "compare": false, 00:07:30.663 "compare_and_write": false, 00:07:30.663 "abort": false, 00:07:30.663 "seek_hole": false, 00:07:30.663 "seek_data": false, 00:07:30.663 "copy": false, 00:07:30.663 "nvme_iov_md": false 00:07:30.663 }, 00:07:30.663 "memory_domains": [ 00:07:30.663 { 00:07:30.663 "dma_device_id": "system", 00:07:30.663 "dma_device_type": 1 00:07:30.663 }, 00:07:30.663 { 00:07:30.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.663 "dma_device_type": 2 00:07:30.663 }, 00:07:30.663 { 00:07:30.663 "dma_device_id": "system", 00:07:30.663 "dma_device_type": 1 00:07:30.663 }, 00:07:30.663 { 00:07:30.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.663 "dma_device_type": 2 00:07:30.663 } 00:07:30.663 ], 00:07:30.663 "driver_specific": { 00:07:30.663 "raid": { 00:07:30.663 "uuid": "3cea64d4-7158-4dfa-a1aa-4d0f12920d49", 00:07:30.663 "strip_size_kb": 64, 00:07:30.663 "state": "online", 00:07:30.663 "raid_level": "concat", 00:07:30.663 "superblock": false, 00:07:30.663 "num_base_bdevs": 2, 00:07:30.663 "num_base_bdevs_discovered": 2, 00:07:30.663 "num_base_bdevs_operational": 2, 00:07:30.663 "base_bdevs_list": [ 00:07:30.663 { 00:07:30.663 "name": "BaseBdev1", 00:07:30.663 "uuid": "1f9618f7-6748-41dc-b36e-30d20f2a080b", 00:07:30.663 "is_configured": true, 00:07:30.663 "data_offset": 0, 00:07:30.663 "data_size": 65536 00:07:30.663 }, 00:07:30.663 { 00:07:30.663 "name": "BaseBdev2", 
00:07:30.663 "uuid": "e9e39a5d-c427-44d4-b8ce-d59b9771b9d9", 00:07:30.663 "is_configured": true, 00:07:30.663 "data_offset": 0, 00:07:30.663 "data_size": 65536 00:07:30.663 } 00:07:30.663 ] 00:07:30.663 } 00:07:30.663 } 00:07:30.663 }' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.663 BaseBdev2' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:30.663 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.663 [2024-11-18 23:02:50.011202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:30.663 [2024-11-18 23:02:50.011269] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:30.663 [2024-11-18 23:02:50.011331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.663 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.922 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.922 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:30.922 "name": "Existed_Raid",
00:07:30.922 "uuid": "3cea64d4-7158-4dfa-a1aa-4d0f12920d49",
00:07:30.922 "strip_size_kb": 64,
00:07:30.922 "state": "offline",
00:07:30.922 "raid_level": "concat",
00:07:30.922 "superblock": false,
00:07:30.922 "num_base_bdevs": 2,
00:07:30.922 "num_base_bdevs_discovered": 1,
00:07:30.922 "num_base_bdevs_operational": 1,
00:07:30.922 "base_bdevs_list": [
00:07:30.922 {
00:07:30.922 "name": null,
00:07:30.922 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.922 "is_configured": false,
00:07:30.922 "data_offset": 0,
00:07:30.922 "data_size": 65536
00:07:30.922 },
00:07:30.922 {
00:07:30.922 "name": "BaseBdev2",
00:07:30.922 "uuid": "e9e39a5d-c427-44d4-b8ce-d59b9771b9d9",
00:07:30.922 "is_configured": true,
00:07:30.922 "data_offset": 0,
00:07:30.922 "data_size": 65536
00:07:30.922 }
00:07:30.922 ]
00:07:30.922 }'
00:07:30.922 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:30.922 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.182 [2024-11-18 23:02:50.521452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:31.182 [2024-11-18 23:02:50.521548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:31.182 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.183 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:31.183 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.183 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73029
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73029 ']'
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73029
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73029
killing process with pid 73029
23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73029'
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73029
00:07:31.443 [2024-11-18 23:02:50.623642] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:31.443 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73029
00:07:31.443 [2024-11-18 23:02:50.624625] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:31.703
00:07:31.703 real 0m3.860s
00:07:31.703 user 0m6.082s
00:07:31.703 sys 0m0.768s
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:31.703 ************************************
00:07:31.703 END TEST raid_state_function_test
00:07:31.703 ************************************
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.703 23:02:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:07:31.703 23:02:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:31.703 23:02:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:31.703 23:02:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:31.703 ************************************
00:07:31.703 START TEST raid_state_function_test_sb
00:07:31.703 ************************************
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73271
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73271'
Process raid pid: 73271
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73271
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73271 ']'
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:31.703 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.704 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:31.704 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:31.704 [2024-11-18 23:02:51.011517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:31.704 [2024-11-18 23:02:51.011729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:31.965 [2024-11-18 23:02:51.163406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.965 [2024-11-18 23:02:51.209682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.965 [2024-11-18 23:02:51.252453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:31.965 [2024-11-18 23:02:51.252576] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.535 [2024-11-18 23:02:51.833939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:32.535 [2024-11-18 23:02:51.834045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:32.535 [2024-11-18 23:02:51.834090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:32.535 [2024-11-18 23:02:51.834101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:32.535 "name": "Existed_Raid",
00:07:32.535 "uuid": "f0198f56-3245-49d3-8535-06980aeb4e09",
00:07:32.535 "strip_size_kb": 64,
00:07:32.535 "state": "configuring",
00:07:32.535 "raid_level": "concat",
00:07:32.535 "superblock": true,
00:07:32.535 "num_base_bdevs": 2,
00:07:32.535 "num_base_bdevs_discovered": 0,
00:07:32.535 "num_base_bdevs_operational": 2,
00:07:32.535 "base_bdevs_list": [
00:07:32.535 {
00:07:32.535 "name": "BaseBdev1",
00:07:32.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:32.535 "is_configured": false,
00:07:32.535 "data_offset": 0,
00:07:32.535 "data_size": 0
00:07:32.535 },
00:07:32.535 {
00:07:32.535 "name": "BaseBdev2",
00:07:32.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:32.535 "is_configured": false,
00:07:32.535 "data_offset": 0,
00:07:32.535 "data_size": 0
00:07:32.535 }
00:07:32.535 ]
00:07:32.535 }'
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:32.535 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.108 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:33.108 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:33.109 [2024-11-18 23:02:52.225156] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:33.109 [2024-11-18 23:02:52.225240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.109 [2024-11-18 23:02:52.233189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.109 [2024-11-18 23:02:52.233266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.109 [2024-11-18 23:02:52.233305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.109 [2024-11-18 23:02:52.233328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.109 [2024-11-18 23:02:52.250063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.109 BaseBdev1 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.109 [ 00:07:33.109 { 00:07:33.109 "name": "BaseBdev1", 00:07:33.109 "aliases": [ 00:07:33.109 "0f2f1a8b-eecf-4b7f-ad36-e5d5c9606f39" 00:07:33.109 ], 00:07:33.109 "product_name": "Malloc disk", 00:07:33.109 "block_size": 512, 00:07:33.109 "num_blocks": 65536, 00:07:33.109 "uuid": "0f2f1a8b-eecf-4b7f-ad36-e5d5c9606f39", 00:07:33.109 "assigned_rate_limits": { 00:07:33.109 "rw_ios_per_sec": 0, 00:07:33.109 "rw_mbytes_per_sec": 0, 00:07:33.109 "r_mbytes_per_sec": 0, 00:07:33.109 "w_mbytes_per_sec": 0 00:07:33.109 }, 00:07:33.109 "claimed": true, 
00:07:33.109 "claim_type": "exclusive_write", 00:07:33.109 "zoned": false, 00:07:33.109 "supported_io_types": { 00:07:33.109 "read": true, 00:07:33.109 "write": true, 00:07:33.109 "unmap": true, 00:07:33.109 "flush": true, 00:07:33.109 "reset": true, 00:07:33.109 "nvme_admin": false, 00:07:33.109 "nvme_io": false, 00:07:33.109 "nvme_io_md": false, 00:07:33.109 "write_zeroes": true, 00:07:33.109 "zcopy": true, 00:07:33.109 "get_zone_info": false, 00:07:33.109 "zone_management": false, 00:07:33.109 "zone_append": false, 00:07:33.109 "compare": false, 00:07:33.109 "compare_and_write": false, 00:07:33.109 "abort": true, 00:07:33.109 "seek_hole": false, 00:07:33.109 "seek_data": false, 00:07:33.109 "copy": true, 00:07:33.109 "nvme_iov_md": false 00:07:33.109 }, 00:07:33.109 "memory_domains": [ 00:07:33.109 { 00:07:33.109 "dma_device_id": "system", 00:07:33.109 "dma_device_type": 1 00:07:33.109 }, 00:07:33.109 { 00:07:33.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.109 "dma_device_type": 2 00:07:33.109 } 00:07:33.109 ], 00:07:33.109 "driver_specific": {} 00:07:33.109 } 00:07:33.109 ] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.109 23:02:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.109 "name": "Existed_Raid", 00:07:33.109 "uuid": "5f8c8c52-b7b8-427c-8466-0c9e6afde7bc", 00:07:33.109 "strip_size_kb": 64, 00:07:33.109 "state": "configuring", 00:07:33.109 "raid_level": "concat", 00:07:33.109 "superblock": true, 00:07:33.109 "num_base_bdevs": 2, 00:07:33.109 "num_base_bdevs_discovered": 1, 00:07:33.109 "num_base_bdevs_operational": 2, 00:07:33.109 "base_bdevs_list": [ 00:07:33.109 { 00:07:33.109 "name": "BaseBdev1", 00:07:33.109 "uuid": "0f2f1a8b-eecf-4b7f-ad36-e5d5c9606f39", 00:07:33.109 "is_configured": true, 00:07:33.109 "data_offset": 2048, 00:07:33.109 "data_size": 63488 00:07:33.109 }, 00:07:33.109 { 00:07:33.109 "name": "BaseBdev2", 00:07:33.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.109 
"is_configured": false, 00:07:33.109 "data_offset": 0, 00:07:33.109 "data_size": 0 00:07:33.109 } 00:07:33.109 ] 00:07:33.109 }' 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.109 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.370 [2024-11-18 23:02:52.697326] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.370 [2024-11-18 23:02:52.697366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.370 [2024-11-18 23:02:52.709355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.370 [2024-11-18 23:02:52.711159] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.370 [2024-11-18 23:02:52.711246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.370 23:02:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.370 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.630 23:02:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.630 "name": "Existed_Raid", 00:07:33.630 "uuid": "9f406631-cbe4-45ba-b878-39014dd7105b", 00:07:33.630 "strip_size_kb": 64, 00:07:33.630 "state": "configuring", 00:07:33.630 "raid_level": "concat", 00:07:33.630 "superblock": true, 00:07:33.630 "num_base_bdevs": 2, 00:07:33.630 "num_base_bdevs_discovered": 1, 00:07:33.630 "num_base_bdevs_operational": 2, 00:07:33.630 "base_bdevs_list": [ 00:07:33.630 { 00:07:33.630 "name": "BaseBdev1", 00:07:33.630 "uuid": "0f2f1a8b-eecf-4b7f-ad36-e5d5c9606f39", 00:07:33.630 "is_configured": true, 00:07:33.630 "data_offset": 2048, 00:07:33.630 "data_size": 63488 00:07:33.630 }, 00:07:33.630 { 00:07:33.630 "name": "BaseBdev2", 00:07:33.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.630 "is_configured": false, 00:07:33.630 "data_offset": 0, 00:07:33.630 "data_size": 0 00:07:33.630 } 00:07:33.630 ] 00:07:33.630 }' 00:07:33.630 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.630 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.890 [2024-11-18 23:02:53.123114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.890 [2024-11-18 23:02:53.123502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.890 [2024-11-18 23:02:53.123533] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.890 BaseBdev2 00:07:33.890 [2024-11-18 23:02:53.124082] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:33.890 [2024-11-18 23:02:53.124333] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:33.890 [2024-11-18 23:02:53.124365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:33.890 [2024-11-18 23:02:53.124601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.890 
23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.890 [ 00:07:33.890 { 00:07:33.890 "name": "BaseBdev2", 00:07:33.890 "aliases": [ 00:07:33.890 "da57371a-e4bd-4e3f-a648-e3c628e848e6" 00:07:33.890 ], 00:07:33.890 "product_name": "Malloc disk", 00:07:33.890 "block_size": 512, 00:07:33.890 "num_blocks": 65536, 00:07:33.890 "uuid": "da57371a-e4bd-4e3f-a648-e3c628e848e6", 00:07:33.890 "assigned_rate_limits": { 00:07:33.890 "rw_ios_per_sec": 0, 00:07:33.890 "rw_mbytes_per_sec": 0, 00:07:33.890 "r_mbytes_per_sec": 0, 00:07:33.890 "w_mbytes_per_sec": 0 00:07:33.890 }, 00:07:33.890 "claimed": true, 00:07:33.890 "claim_type": "exclusive_write", 00:07:33.890 "zoned": false, 00:07:33.890 "supported_io_types": { 00:07:33.890 "read": true, 00:07:33.890 "write": true, 00:07:33.890 "unmap": true, 00:07:33.890 "flush": true, 00:07:33.890 "reset": true, 00:07:33.890 "nvme_admin": false, 00:07:33.890 "nvme_io": false, 00:07:33.890 "nvme_io_md": false, 00:07:33.890 "write_zeroes": true, 00:07:33.890 "zcopy": true, 00:07:33.890 "get_zone_info": false, 00:07:33.890 "zone_management": false, 00:07:33.890 "zone_append": false, 00:07:33.890 "compare": false, 00:07:33.890 "compare_and_write": false, 00:07:33.890 "abort": true, 00:07:33.890 "seek_hole": false, 00:07:33.890 "seek_data": false, 00:07:33.890 "copy": true, 00:07:33.890 "nvme_iov_md": false 00:07:33.890 }, 00:07:33.890 "memory_domains": [ 00:07:33.890 { 00:07:33.890 "dma_device_id": "system", 00:07:33.890 "dma_device_type": 1 00:07:33.890 }, 00:07:33.890 { 00:07:33.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.890 "dma_device_type": 2 00:07:33.890 } 00:07:33.890 ], 00:07:33.890 "driver_specific": {} 00:07:33.890 } 00:07:33.890 ] 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:33.890 23:02:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:33.890 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.891 23:02:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.891 "name": "Existed_Raid", 00:07:33.891 "uuid": "9f406631-cbe4-45ba-b878-39014dd7105b", 00:07:33.891 "strip_size_kb": 64, 00:07:33.891 "state": "online", 00:07:33.891 "raid_level": "concat", 00:07:33.891 "superblock": true, 00:07:33.891 "num_base_bdevs": 2, 00:07:33.891 "num_base_bdevs_discovered": 2, 00:07:33.891 "num_base_bdevs_operational": 2, 00:07:33.891 "base_bdevs_list": [ 00:07:33.891 { 00:07:33.891 "name": "BaseBdev1", 00:07:33.891 "uuid": "0f2f1a8b-eecf-4b7f-ad36-e5d5c9606f39", 00:07:33.891 "is_configured": true, 00:07:33.891 "data_offset": 2048, 00:07:33.891 "data_size": 63488 00:07:33.891 }, 00:07:33.891 { 00:07:33.891 "name": "BaseBdev2", 00:07:33.891 "uuid": "da57371a-e4bd-4e3f-a648-e3c628e848e6", 00:07:33.891 "is_configured": true, 00:07:33.891 "data_offset": 2048, 00:07:33.891 "data_size": 63488 00:07:33.891 } 00:07:33.891 ] 00:07:33.891 }' 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.891 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.460 23:02:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.460 [2024-11-18 23:02:53.602569] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.460 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.460 "name": "Existed_Raid", 00:07:34.460 "aliases": [ 00:07:34.460 "9f406631-cbe4-45ba-b878-39014dd7105b" 00:07:34.460 ], 00:07:34.460 "product_name": "Raid Volume", 00:07:34.460 "block_size": 512, 00:07:34.460 "num_blocks": 126976, 00:07:34.460 "uuid": "9f406631-cbe4-45ba-b878-39014dd7105b", 00:07:34.460 "assigned_rate_limits": { 00:07:34.460 "rw_ios_per_sec": 0, 00:07:34.460 "rw_mbytes_per_sec": 0, 00:07:34.460 "r_mbytes_per_sec": 0, 00:07:34.460 "w_mbytes_per_sec": 0 00:07:34.460 }, 00:07:34.460 "claimed": false, 00:07:34.460 "zoned": false, 00:07:34.460 "supported_io_types": { 00:07:34.460 "read": true, 00:07:34.460 "write": true, 00:07:34.460 "unmap": true, 00:07:34.460 "flush": true, 00:07:34.460 "reset": true, 00:07:34.460 "nvme_admin": false, 00:07:34.460 "nvme_io": false, 00:07:34.460 "nvme_io_md": false, 00:07:34.460 "write_zeroes": true, 00:07:34.460 "zcopy": false, 00:07:34.460 "get_zone_info": false, 00:07:34.460 "zone_management": false, 00:07:34.460 "zone_append": false, 00:07:34.460 "compare": false, 00:07:34.460 "compare_and_write": false, 00:07:34.460 "abort": false, 00:07:34.460 "seek_hole": false, 00:07:34.460 "seek_data": false, 00:07:34.460 "copy": false, 00:07:34.460 "nvme_iov_md": false 00:07:34.460 }, 00:07:34.460 "memory_domains": [ 00:07:34.460 { 00:07:34.460 "dma_device_id": 
"system", 00:07:34.460 "dma_device_type": 1 00:07:34.460 }, 00:07:34.460 { 00:07:34.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.461 "dma_device_type": 2 00:07:34.461 }, 00:07:34.461 { 00:07:34.461 "dma_device_id": "system", 00:07:34.461 "dma_device_type": 1 00:07:34.461 }, 00:07:34.461 { 00:07:34.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.461 "dma_device_type": 2 00:07:34.461 } 00:07:34.461 ], 00:07:34.461 "driver_specific": { 00:07:34.461 "raid": { 00:07:34.461 "uuid": "9f406631-cbe4-45ba-b878-39014dd7105b", 00:07:34.461 "strip_size_kb": 64, 00:07:34.461 "state": "online", 00:07:34.461 "raid_level": "concat", 00:07:34.461 "superblock": true, 00:07:34.461 "num_base_bdevs": 2, 00:07:34.461 "num_base_bdevs_discovered": 2, 00:07:34.461 "num_base_bdevs_operational": 2, 00:07:34.461 "base_bdevs_list": [ 00:07:34.461 { 00:07:34.461 "name": "BaseBdev1", 00:07:34.461 "uuid": "0f2f1a8b-eecf-4b7f-ad36-e5d5c9606f39", 00:07:34.461 "is_configured": true, 00:07:34.461 "data_offset": 2048, 00:07:34.461 "data_size": 63488 00:07:34.461 }, 00:07:34.461 { 00:07:34.461 "name": "BaseBdev2", 00:07:34.461 "uuid": "da57371a-e4bd-4e3f-a648-e3c628e848e6", 00:07:34.461 "is_configured": true, 00:07:34.461 "data_offset": 2048, 00:07:34.461 "data_size": 63488 00:07:34.461 } 00:07:34.461 ] 00:07:34.461 } 00:07:34.461 } 00:07:34.461 }' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.461 BaseBdev2' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.461 [2024-11-18 23:02:53.742087] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.461 [2024-11-18 23:02:53.742113] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.461 [2024-11-18 23:02:53.742170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.461 23:02:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.461 "name": "Existed_Raid", 00:07:34.461 "uuid": "9f406631-cbe4-45ba-b878-39014dd7105b", 00:07:34.461 "strip_size_kb": 64, 00:07:34.461 "state": "offline", 00:07:34.461 "raid_level": "concat", 00:07:34.461 "superblock": true, 00:07:34.461 "num_base_bdevs": 2, 00:07:34.461 "num_base_bdevs_discovered": 1, 00:07:34.461 "num_base_bdevs_operational": 1, 00:07:34.461 "base_bdevs_list": [ 00:07:34.461 { 00:07:34.461 "name": null, 00:07:34.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.461 "is_configured": false, 00:07:34.461 "data_offset": 0, 00:07:34.461 "data_size": 63488 00:07:34.461 }, 00:07:34.461 { 00:07:34.461 "name": "BaseBdev2", 00:07:34.461 "uuid": "da57371a-e4bd-4e3f-a648-e3c628e848e6", 00:07:34.461 "is_configured": true, 00:07:34.461 "data_offset": 2048, 00:07:34.461 "data_size": 63488 00:07:34.461 } 00:07:34.461 ] 00:07:34.461 }' 00:07:34.461 
23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.461 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.031 [2024-11-18 23:02:54.244406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.031 [2024-11-18 23:02:54.244504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73271 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73271 ']' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73271 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73271 00:07:35.031 killing process with pid 73271 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73271' 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73271 00:07:35.031 [2024-11-18 23:02:54.352573] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.031 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73271 00:07:35.031 [2024-11-18 23:02:54.353530] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.291 ************************************ 00:07:35.291 END TEST raid_state_function_test_sb 00:07:35.291 ************************************ 00:07:35.291 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.291 00:07:35.291 real 0m3.664s 00:07:35.291 user 0m5.707s 00:07:35.291 sys 0m0.730s 00:07:35.291 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.291 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.291 23:02:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:35.291 23:02:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:35.291 23:02:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.291 23:02:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.291 ************************************ 00:07:35.291 START TEST raid_superblock_test 00:07:35.291 ************************************ 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73512 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73512 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73512 ']' 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.291 23:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.292 23:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.292 23:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.292 23:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.551 [2024-11-18 23:02:54.737112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:35.551 [2024-11-18 23:02:54.737324] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73512 ] 00:07:35.551 [2024-11-18 23:02:54.896679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.811 [2024-11-18 23:02:54.941788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.811 [2024-11-18 23:02:54.983563] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.811 [2024-11-18 23:02:54.983682] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.380 23:02:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.380 malloc1 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.380 [2024-11-18 23:02:55.565854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:36.380 [2024-11-18 23:02:55.565940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.380 [2024-11-18 23:02:55.565961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:36.380 [2024-11-18 23:02:55.565975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.380 
[2024-11-18 23:02:55.568026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.380 [2024-11-18 23:02:55.568066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:36.380 pt1 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.380 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.380 malloc2 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.381 23:02:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.381 [2024-11-18 23:02:55.606151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:36.381 [2024-11-18 23:02:55.606250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.381 [2024-11-18 23:02:55.606272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:36.381 [2024-11-18 23:02:55.606306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.381 [2024-11-18 23:02:55.608552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.381 [2024-11-18 23:02:55.608577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:36.381 pt2 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.381 [2024-11-18 23:02:55.618180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:36.381 [2024-11-18 23:02:55.620060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:36.381 [2024-11-18 23:02:55.620230] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:36.381 [2024-11-18 23:02:55.620294] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.381 
[2024-11-18 23:02:55.620559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:36.381 [2024-11-18 23:02:55.620737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:36.381 [2024-11-18 23:02:55.620776] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:36.381 [2024-11-18 23:02:55.620934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.381 23:02:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.381 "name": "raid_bdev1", 00:07:36.381 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:36.381 "strip_size_kb": 64, 00:07:36.381 "state": "online", 00:07:36.381 "raid_level": "concat", 00:07:36.381 "superblock": true, 00:07:36.381 "num_base_bdevs": 2, 00:07:36.381 "num_base_bdevs_discovered": 2, 00:07:36.381 "num_base_bdevs_operational": 2, 00:07:36.381 "base_bdevs_list": [ 00:07:36.381 { 00:07:36.381 "name": "pt1", 00:07:36.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.381 "is_configured": true, 00:07:36.381 "data_offset": 2048, 00:07:36.381 "data_size": 63488 00:07:36.381 }, 00:07:36.381 { 00:07:36.381 "name": "pt2", 00:07:36.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.381 "is_configured": true, 00:07:36.381 "data_offset": 2048, 00:07:36.381 "data_size": 63488 00:07:36.381 } 00:07:36.381 ] 00:07:36.381 }' 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.381 23:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.950 
23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.950 [2024-11-18 23:02:56.065647] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.950 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:36.950 "name": "raid_bdev1", 00:07:36.950 "aliases": [ 00:07:36.950 "45608547-08cb-4335-bed4-1db91f6844f5" 00:07:36.950 ], 00:07:36.950 "product_name": "Raid Volume", 00:07:36.950 "block_size": 512, 00:07:36.950 "num_blocks": 126976, 00:07:36.950 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:36.950 "assigned_rate_limits": { 00:07:36.950 "rw_ios_per_sec": 0, 00:07:36.950 "rw_mbytes_per_sec": 0, 00:07:36.950 "r_mbytes_per_sec": 0, 00:07:36.950 "w_mbytes_per_sec": 0 00:07:36.950 }, 00:07:36.950 "claimed": false, 00:07:36.950 "zoned": false, 00:07:36.950 "supported_io_types": { 00:07:36.950 "read": true, 00:07:36.950 "write": true, 00:07:36.950 "unmap": true, 00:07:36.950 "flush": true, 00:07:36.950 "reset": true, 00:07:36.950 "nvme_admin": false, 00:07:36.950 "nvme_io": false, 00:07:36.950 "nvme_io_md": false, 00:07:36.950 "write_zeroes": true, 00:07:36.950 "zcopy": false, 00:07:36.950 "get_zone_info": false, 00:07:36.950 "zone_management": false, 00:07:36.950 "zone_append": false, 00:07:36.950 "compare": false, 00:07:36.950 "compare_and_write": false, 00:07:36.950 "abort": false, 00:07:36.950 "seek_hole": false, 00:07:36.950 
"seek_data": false, 00:07:36.950 "copy": false, 00:07:36.950 "nvme_iov_md": false 00:07:36.950 }, 00:07:36.950 "memory_domains": [ 00:07:36.950 { 00:07:36.950 "dma_device_id": "system", 00:07:36.951 "dma_device_type": 1 00:07:36.951 }, 00:07:36.951 { 00:07:36.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.951 "dma_device_type": 2 00:07:36.951 }, 00:07:36.951 { 00:07:36.951 "dma_device_id": "system", 00:07:36.951 "dma_device_type": 1 00:07:36.951 }, 00:07:36.951 { 00:07:36.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.951 "dma_device_type": 2 00:07:36.951 } 00:07:36.951 ], 00:07:36.951 "driver_specific": { 00:07:36.951 "raid": { 00:07:36.951 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:36.951 "strip_size_kb": 64, 00:07:36.951 "state": "online", 00:07:36.951 "raid_level": "concat", 00:07:36.951 "superblock": true, 00:07:36.951 "num_base_bdevs": 2, 00:07:36.951 "num_base_bdevs_discovered": 2, 00:07:36.951 "num_base_bdevs_operational": 2, 00:07:36.951 "base_bdevs_list": [ 00:07:36.951 { 00:07:36.951 "name": "pt1", 00:07:36.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.951 "is_configured": true, 00:07:36.951 "data_offset": 2048, 00:07:36.951 "data_size": 63488 00:07:36.951 }, 00:07:36.951 { 00:07:36.951 "name": "pt2", 00:07:36.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.951 "is_configured": true, 00:07:36.951 "data_offset": 2048, 00:07:36.951 "data_size": 63488 00:07:36.951 } 00:07:36.951 ] 00:07:36.951 } 00:07:36.951 } 00:07:36.951 }' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:36.951 pt2' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.951 23:02:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.951 [2024-11-18 23:02:56.281239] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=45608547-08cb-4335-bed4-1db91f6844f5 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 45608547-08cb-4335-bed4-1db91f6844f5 ']' 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.951 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.951 [2024-11-18 23:02:56.324906] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.951 [2024-11-18 23:02:56.324975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:36.951 [2024-11-18 23:02:56.325073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.951 [2024-11-18 23:02:56.325151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:36.951 [2024-11-18 23:02:56.325217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.211 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.212 [2024-11-18 23:02:56.456709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:37.212 [2024-11-18 23:02:56.458498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:37.212 [2024-11-18 23:02:56.458617] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:37.212 [2024-11-18 23:02:56.458669] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:37.212 [2024-11-18 23:02:56.458685] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.212 [2024-11-18 23:02:56.458694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:37.212 request: 00:07:37.212 { 00:07:37.212 "name": "raid_bdev1", 00:07:37.212 "raid_level": "concat", 00:07:37.212 "base_bdevs": [ 00:07:37.212 "malloc1", 00:07:37.212 "malloc2" 00:07:37.212 ], 00:07:37.212 "strip_size_kb": 64, 00:07:37.212 "superblock": false, 00:07:37.212 "method": "bdev_raid_create", 00:07:37.212 "req_id": 1 00:07:37.212 } 00:07:37.212 Got JSON-RPC error response 00:07:37.212 response: 00:07:37.212 { 00:07:37.212 "code": -17, 00:07:37.212 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:37.212 } 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.212 
23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.212 [2024-11-18 23:02:56.520577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.212 [2024-11-18 23:02:56.520669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.212 [2024-11-18 23:02:56.520703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:37.212 [2024-11-18 23:02:56.520729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.212 [2024-11-18 23:02:56.522818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.212 [2024-11-18 23:02:56.522882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.212 [2024-11-18 23:02:56.522966] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:37.212 [2024-11-18 23:02:56.523022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.212 pt1 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.212 "name": "raid_bdev1", 00:07:37.212 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:37.212 "strip_size_kb": 64, 00:07:37.212 "state": "configuring", 00:07:37.212 "raid_level": "concat", 00:07:37.212 "superblock": true, 00:07:37.212 "num_base_bdevs": 2, 00:07:37.212 "num_base_bdevs_discovered": 1, 00:07:37.212 "num_base_bdevs_operational": 2, 00:07:37.212 "base_bdevs_list": [ 00:07:37.212 { 00:07:37.212 "name": "pt1", 00:07:37.212 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:37.212 "is_configured": true, 00:07:37.212 "data_offset": 2048, 00:07:37.212 "data_size": 63488 00:07:37.212 }, 00:07:37.212 { 00:07:37.212 "name": null, 00:07:37.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.212 "is_configured": false, 00:07:37.212 "data_offset": 2048, 00:07:37.212 "data_size": 63488 00:07:37.212 } 00:07:37.212 ] 00:07:37.212 }' 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.212 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.782 [2024-11-18 23:02:56.923873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.782 [2024-11-18 23:02:56.923932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.782 [2024-11-18 23:02:56.923953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:37.782 [2024-11-18 23:02:56.923961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.782 [2024-11-18 23:02:56.924357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.782 [2024-11-18 23:02:56.924374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:37.782 [2024-11-18 23:02:56.924456] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:37.782 [2024-11-18 23:02:56.924481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.782 [2024-11-18 23:02:56.924574] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:37.782 [2024-11-18 23:02:56.924585] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.782 [2024-11-18 23:02:56.924808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:37.782 [2024-11-18 23:02:56.924915] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:37.782 [2024-11-18 23:02:56.924929] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:37.782 [2024-11-18 23:02:56.925021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.782 pt2 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.782 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.782 "name": "raid_bdev1", 00:07:37.782 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:37.782 "strip_size_kb": 64, 00:07:37.782 "state": "online", 00:07:37.782 "raid_level": "concat", 00:07:37.782 "superblock": true, 00:07:37.782 "num_base_bdevs": 2, 00:07:37.782 "num_base_bdevs_discovered": 2, 00:07:37.782 "num_base_bdevs_operational": 2, 00:07:37.783 "base_bdevs_list": [ 00:07:37.783 { 00:07:37.783 "name": "pt1", 00:07:37.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.783 "is_configured": true, 00:07:37.783 "data_offset": 2048, 00:07:37.783 "data_size": 63488 00:07:37.783 }, 00:07:37.783 { 00:07:37.783 "name": "pt2", 00:07:37.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.783 "is_configured": true, 00:07:37.783 "data_offset": 2048, 00:07:37.783 "data_size": 63488 00:07:37.783 } 00:07:37.783 ] 00:07:37.783 }' 00:07:37.783 23:02:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.783 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.043 [2024-11-18 23:02:57.331482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.043 "name": "raid_bdev1", 00:07:38.043 "aliases": [ 00:07:38.043 "45608547-08cb-4335-bed4-1db91f6844f5" 00:07:38.043 ], 00:07:38.043 "product_name": "Raid Volume", 00:07:38.043 "block_size": 512, 00:07:38.043 "num_blocks": 126976, 00:07:38.043 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:38.043 "assigned_rate_limits": { 00:07:38.043 "rw_ios_per_sec": 0, 00:07:38.043 "rw_mbytes_per_sec": 0, 00:07:38.043 
"r_mbytes_per_sec": 0, 00:07:38.043 "w_mbytes_per_sec": 0 00:07:38.043 }, 00:07:38.043 "claimed": false, 00:07:38.043 "zoned": false, 00:07:38.043 "supported_io_types": { 00:07:38.043 "read": true, 00:07:38.043 "write": true, 00:07:38.043 "unmap": true, 00:07:38.043 "flush": true, 00:07:38.043 "reset": true, 00:07:38.043 "nvme_admin": false, 00:07:38.043 "nvme_io": false, 00:07:38.043 "nvme_io_md": false, 00:07:38.043 "write_zeroes": true, 00:07:38.043 "zcopy": false, 00:07:38.043 "get_zone_info": false, 00:07:38.043 "zone_management": false, 00:07:38.043 "zone_append": false, 00:07:38.043 "compare": false, 00:07:38.043 "compare_and_write": false, 00:07:38.043 "abort": false, 00:07:38.043 "seek_hole": false, 00:07:38.043 "seek_data": false, 00:07:38.043 "copy": false, 00:07:38.043 "nvme_iov_md": false 00:07:38.043 }, 00:07:38.043 "memory_domains": [ 00:07:38.043 { 00:07:38.043 "dma_device_id": "system", 00:07:38.043 "dma_device_type": 1 00:07:38.043 }, 00:07:38.043 { 00:07:38.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.043 "dma_device_type": 2 00:07:38.043 }, 00:07:38.043 { 00:07:38.043 "dma_device_id": "system", 00:07:38.043 "dma_device_type": 1 00:07:38.043 }, 00:07:38.043 { 00:07:38.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.043 "dma_device_type": 2 00:07:38.043 } 00:07:38.043 ], 00:07:38.043 "driver_specific": { 00:07:38.043 "raid": { 00:07:38.043 "uuid": "45608547-08cb-4335-bed4-1db91f6844f5", 00:07:38.043 "strip_size_kb": 64, 00:07:38.043 "state": "online", 00:07:38.043 "raid_level": "concat", 00:07:38.043 "superblock": true, 00:07:38.043 "num_base_bdevs": 2, 00:07:38.043 "num_base_bdevs_discovered": 2, 00:07:38.043 "num_base_bdevs_operational": 2, 00:07:38.043 "base_bdevs_list": [ 00:07:38.043 { 00:07:38.043 "name": "pt1", 00:07:38.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.043 "is_configured": true, 00:07:38.043 "data_offset": 2048, 00:07:38.043 "data_size": 63488 00:07:38.043 }, 00:07:38.043 { 00:07:38.043 "name": 
"pt2", 00:07:38.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.043 "is_configured": true, 00:07:38.043 "data_offset": 2048, 00:07:38.043 "data_size": 63488 00:07:38.043 } 00:07:38.043 ] 00:07:38.043 } 00:07:38.043 } 00:07:38.043 }' 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:38.043 pt2' 00:07:38.043 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.304 23:02:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.304 [2024-11-18 23:02:57.511110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 45608547-08cb-4335-bed4-1db91f6844f5 '!=' 45608547-08cb-4335-bed4-1db91f6844f5 ']' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73512 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73512 ']' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 73512 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73512 00:07:38.304 killing process with pid 73512 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73512' 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73512 00:07:38.304 [2024-11-18 23:02:57.591119] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.304 [2024-11-18 23:02:57.591194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.304 [2024-11-18 23:02:57.591241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.304 [2024-11-18 23:02:57.591249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:38.304 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73512 00:07:38.304 [2024-11-18 23:02:57.613263] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.564 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:38.564 ************************************ 00:07:38.564 END TEST raid_superblock_test 00:07:38.564 ************************************ 00:07:38.564 00:07:38.564 real 0m3.194s 00:07:38.564 user 0m4.915s 00:07:38.564 sys 0m0.652s 00:07:38.564 23:02:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.564 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.564 23:02:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:38.564 23:02:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:38.564 23:02:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.564 23:02:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.564 ************************************ 00:07:38.564 START TEST raid_read_error_test 00:07:38.564 ************************************ 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J44JyL58cL 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73707 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73707 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73707 ']' 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.564 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.564 23:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.825 [2024-11-18 23:02:58.014310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.825 [2024-11-18 23:02:58.014442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73707 ] 00:07:38.825 [2024-11-18 23:02:58.174583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.085 [2024-11-18 23:02:58.219445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.085 [2024-11-18 23:02:58.261655] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.085 [2024-11-18 23:02:58.261693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 BaseBdev1_malloc 
00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 true 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 [2024-11-18 23:02:58.871740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.656 [2024-11-18 23:02:58.871792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.656 [2024-11-18 23:02:58.871810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.656 [2024-11-18 23:02:58.871819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.656 [2024-11-18 23:02:58.873874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.656 [2024-11-18 23:02:58.873907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.656 BaseBdev1 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 BaseBdev2_malloc 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 true 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 [2024-11-18 23:02:58.923022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.656 [2024-11-18 23:02:58.923082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.656 [2024-11-18 23:02:58.923106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.656 [2024-11-18 23:02:58.923116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.656 [2024-11-18 23:02:58.925762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.656 [2024-11-18 23:02:58.925796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.656 BaseBdev2 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.656 [2024-11-18 23:02:58.935000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.656 [2024-11-18 23:02:58.936803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.656 [2024-11-18 23:02:58.936966] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:39.656 [2024-11-18 23:02:58.936979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.656 [2024-11-18 23:02:58.937244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:39.656 [2024-11-18 23:02:58.937387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:39.656 [2024-11-18 23:02:58.937404] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:39.656 [2024-11-18 23:02:58.937528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.656 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.657 "name": "raid_bdev1", 00:07:39.657 "uuid": "b5bcd956-9acb-43fa-942c-56f4d4005407", 00:07:39.657 "strip_size_kb": 64, 00:07:39.657 "state": "online", 00:07:39.657 "raid_level": "concat", 00:07:39.657 "superblock": true, 00:07:39.657 "num_base_bdevs": 2, 00:07:39.657 "num_base_bdevs_discovered": 2, 00:07:39.657 "num_base_bdevs_operational": 2, 00:07:39.657 "base_bdevs_list": [ 00:07:39.657 { 00:07:39.657 "name": "BaseBdev1", 00:07:39.657 "uuid": "69534877-cc25-524e-a027-1aed99069582", 00:07:39.657 "is_configured": true, 00:07:39.657 "data_offset": 2048, 00:07:39.657 "data_size": 63488 00:07:39.657 }, 00:07:39.657 { 00:07:39.657 "name": "BaseBdev2", 00:07:39.657 
"uuid": "71c0cb19-ff45-5458-85da-204220a5f6ae", 00:07:39.657 "is_configured": true, 00:07:39.657 "data_offset": 2048, 00:07:39.657 "data_size": 63488 00:07:39.657 } 00:07:39.657 ] 00:07:39.657 }' 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.657 23:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.226 23:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.226 23:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.226 [2024-11-18 23:02:59.466422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.165 "name": "raid_bdev1", 00:07:41.165 "uuid": "b5bcd956-9acb-43fa-942c-56f4d4005407", 00:07:41.165 "strip_size_kb": 64, 00:07:41.165 "state": "online", 00:07:41.165 "raid_level": "concat", 00:07:41.165 "superblock": true, 00:07:41.165 "num_base_bdevs": 2, 00:07:41.165 "num_base_bdevs_discovered": 2, 00:07:41.165 "num_base_bdevs_operational": 2, 00:07:41.165 "base_bdevs_list": [ 00:07:41.165 { 00:07:41.165 "name": "BaseBdev1", 00:07:41.165 "uuid": "69534877-cc25-524e-a027-1aed99069582", 00:07:41.165 "is_configured": true, 00:07:41.165 "data_offset": 2048, 00:07:41.165 "data_size": 63488 00:07:41.165 }, 00:07:41.165 { 00:07:41.165 "name": "BaseBdev2", 00:07:41.165 "uuid": 
"71c0cb19-ff45-5458-85da-204220a5f6ae", 00:07:41.165 "is_configured": true, 00:07:41.165 "data_offset": 2048, 00:07:41.165 "data_size": 63488 00:07:41.165 } 00:07:41.165 ] 00:07:41.165 }' 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.165 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.425 [2024-11-18 23:03:00.765639] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.425 [2024-11-18 23:03:00.765679] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.425 [2024-11-18 23:03:00.768079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.425 [2024-11-18 23:03:00.768125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.425 [2024-11-18 23:03:00.768158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.425 [2024-11-18 23:03:00.768177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:41.425 { 00:07:41.425 "results": [ 00:07:41.425 { 00:07:41.425 "job": "raid_bdev1", 00:07:41.425 "core_mask": "0x1", 00:07:41.425 "workload": "randrw", 00:07:41.425 "percentage": 50, 00:07:41.425 "status": "finished", 00:07:41.425 "queue_depth": 1, 00:07:41.425 "io_size": 131072, 00:07:41.425 "runtime": 1.29986, 00:07:41.425 "iops": 17991.168279660887, 00:07:41.425 "mibps": 2248.896034957611, 00:07:41.425 "io_failed": 1, 00:07:41.425 "io_timeout": 0, 00:07:41.425 "avg_latency_us": 
76.75351427835753, 00:07:41.425 "min_latency_us": 24.258515283842794, 00:07:41.425 "max_latency_us": 1409.4532751091704 00:07:41.425 } 00:07:41.425 ], 00:07:41.425 "core_count": 1 00:07:41.425 } 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73707 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73707 ']' 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73707 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.425 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73707 00:07:41.692 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.692 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.692 killing process with pid 73707 00:07:41.692 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73707' 00:07:41.692 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73707 00:07:41.692 [2024-11-18 23:03:00.813179] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.693 23:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73707 00:07:41.693 [2024-11-18 23:03:00.828151] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.693 23:03:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J44JyL58cL 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:07:41.693 00:07:41.693 real 0m3.148s 00:07:41.693 user 0m3.970s 00:07:41.693 sys 0m0.492s 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.693 23:03:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.693 ************************************ 00:07:41.693 END TEST raid_read_error_test 00:07:41.693 ************************************ 00:07:41.960 23:03:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:41.960 23:03:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.960 23:03:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.960 23:03:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.960 ************************************ 00:07:41.960 START TEST raid_write_error_test 00:07:41.960 ************************************ 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=write 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 
00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wAmOmJT6Ma 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73836 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73836 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73836 ']' 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.960 23:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.960 [2024-11-18 23:03:01.222835] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:41.960 [2024-11-18 23:03:01.222964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73836 ] 00:07:42.220 [2024-11-18 23:03:01.382943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.220 [2024-11-18 23:03:01.428455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.220 [2024-11-18 23:03:01.470763] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.220 [2024-11-18 23:03:01.470795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 BaseBdev1_malloc 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 true 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 [2024-11-18 23:03:02.077268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.802 [2024-11-18 23:03:02.077349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.802 [2024-11-18 23:03:02.077387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.802 [2024-11-18 23:03:02.077396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.802 [2024-11-18 23:03:02.079491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.802 [2024-11-18 23:03:02.079526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.802 BaseBdev1 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 BaseBdev2_malloc 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.802 23:03:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 true 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 [2024-11-18 23:03:02.131544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:42.802 [2024-11-18 23:03:02.131613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.802 [2024-11-18 23:03:02.131640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:42.802 [2024-11-18 23:03:02.131653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.802 [2024-11-18 23:03:02.134733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.802 [2024-11-18 23:03:02.134780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:42.802 BaseBdev2 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.802 [2024-11-18 23:03:02.143656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:42.802 [2024-11-18 23:03:02.145700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.802 [2024-11-18 23:03:02.145885] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:42.802 [2024-11-18 23:03:02.145900] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.802 [2024-11-18 23:03:02.146169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:42.802 [2024-11-18 23:03:02.146337] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:42.802 [2024-11-18 23:03:02.146367] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:42.802 [2024-11-18 23:03:02.146508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.802 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.803 23:03:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.803 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.062 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.062 "name": "raid_bdev1", 00:07:43.062 "uuid": "cdaab28f-93f8-4f36-b1b8-ff8f82bcb6b7", 00:07:43.062 "strip_size_kb": 64, 00:07:43.062 "state": "online", 00:07:43.062 "raid_level": "concat", 00:07:43.062 "superblock": true, 00:07:43.062 "num_base_bdevs": 2, 00:07:43.062 "num_base_bdevs_discovered": 2, 00:07:43.062 "num_base_bdevs_operational": 2, 00:07:43.062 "base_bdevs_list": [ 00:07:43.062 { 00:07:43.062 "name": "BaseBdev1", 00:07:43.062 "uuid": "1b478a42-e142-5137-b486-59389246d7af", 00:07:43.062 "is_configured": true, 00:07:43.062 "data_offset": 2048, 00:07:43.062 "data_size": 63488 00:07:43.062 }, 00:07:43.062 { 00:07:43.062 "name": "BaseBdev2", 00:07:43.062 "uuid": "6189ba4a-2ba2-5585-9ca5-89efabcd138e", 00:07:43.062 "is_configured": true, 00:07:43.062 "data_offset": 2048, 00:07:43.062 "data_size": 63488 00:07:43.062 } 00:07:43.062 ] 00:07:43.062 }' 00:07:43.062 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.062 23:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.321 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:43.321 23:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:43.321 [2024-11-18 23:03:02.687279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.306 "name": "raid_bdev1", 00:07:44.306 "uuid": "cdaab28f-93f8-4f36-b1b8-ff8f82bcb6b7", 00:07:44.306 "strip_size_kb": 64, 00:07:44.306 "state": "online", 00:07:44.306 "raid_level": "concat", 00:07:44.306 "superblock": true, 00:07:44.306 "num_base_bdevs": 2, 00:07:44.306 "num_base_bdevs_discovered": 2, 00:07:44.306 "num_base_bdevs_operational": 2, 00:07:44.306 "base_bdevs_list": [ 00:07:44.306 { 00:07:44.306 "name": "BaseBdev1", 00:07:44.306 "uuid": "1b478a42-e142-5137-b486-59389246d7af", 00:07:44.306 "is_configured": true, 00:07:44.306 "data_offset": 2048, 00:07:44.306 "data_size": 63488 00:07:44.306 }, 00:07:44.306 { 00:07:44.306 "name": "BaseBdev2", 00:07:44.306 "uuid": "6189ba4a-2ba2-5585-9ca5-89efabcd138e", 00:07:44.306 "is_configured": true, 00:07:44.306 "data_offset": 2048, 00:07:44.306 "data_size": 63488 00:07:44.306 } 00:07:44.306 ] 00:07:44.306 }' 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.306 23:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.875 23:03:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.875 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.875 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.875 [2024-11-18 23:03:04.034772] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.875 [2024-11-18 23:03:04.034805] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.875 [2024-11-18 23:03:04.037246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.875 [2024-11-18 23:03:04.037318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.875 [2024-11-18 23:03:04.037351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.875 [2024-11-18 23:03:04.037360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:44.875 { 00:07:44.875 "results": [ 00:07:44.875 { 00:07:44.875 "job": "raid_bdev1", 00:07:44.875 "core_mask": "0x1", 00:07:44.875 "workload": "randrw", 00:07:44.875 "percentage": 50, 00:07:44.875 "status": "finished", 00:07:44.875 "queue_depth": 1, 00:07:44.875 "io_size": 131072, 00:07:44.875 "runtime": 1.348328, 00:07:44.875 "iops": 18000.071199292754, 00:07:44.875 "mibps": 2250.008899911594, 00:07:44.875 "io_failed": 1, 00:07:44.875 "io_timeout": 0, 00:07:44.875 "avg_latency_us": 76.81056599075325, 00:07:44.875 "min_latency_us": 24.370305676855896, 00:07:44.875 "max_latency_us": 1409.4532751091704 00:07:44.875 } 00:07:44.875 ], 00:07:44.876 "core_count": 1 00:07:44.876 } 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73836 00:07:44.876 23:03:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73836 ']' 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73836 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73836 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.876 killing process with pid 73836 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73836' 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73836 00:07:44.876 [2024-11-18 23:03:04.067490] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.876 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73836 00:07:44.876 [2024-11-18 23:03:04.082143] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wAmOmJT6Ma 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.135 23:03:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:45.135 00:07:45.135 real 0m3.196s 00:07:45.135 user 0m4.044s 00:07:45.135 sys 0m0.484s 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.135 23:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.135 ************************************ 00:07:45.135 END TEST raid_write_error_test 00:07:45.135 ************************************ 00:07:45.135 23:03:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:45.135 23:03:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:45.135 23:03:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.135 23:03:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.135 23:03:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.135 ************************************ 00:07:45.135 START TEST raid_state_function_test 00:07:45.135 ************************************ 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.135 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73963 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73963' 00:07:45.136 Process raid pid: 73963 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73963 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73963 ']' 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.136 23:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.136 [2024-11-18 23:03:04.484680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.136 [2024-11-18 23:03:04.484800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.396 [2024-11-18 23:03:04.647961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.396 [2024-11-18 23:03:04.692146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.396 [2024-11-18 23:03:04.733841] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.396 [2024-11-18 23:03:04.733878] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.966 [2024-11-18 23:03:05.315029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.966 [2024-11-18 23:03:05.315077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.966 [2024-11-18 23:03:05.315096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.966 [2024-11-18 23:03:05.315123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.966 23:03:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.966 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.225 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.225 "name": "Existed_Raid", 00:07:46.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.225 "strip_size_kb": 0, 00:07:46.225 "state": "configuring", 00:07:46.225 
"raid_level": "raid1", 00:07:46.225 "superblock": false, 00:07:46.225 "num_base_bdevs": 2, 00:07:46.225 "num_base_bdevs_discovered": 0, 00:07:46.225 "num_base_bdevs_operational": 2, 00:07:46.225 "base_bdevs_list": [ 00:07:46.225 { 00:07:46.225 "name": "BaseBdev1", 00:07:46.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.225 "is_configured": false, 00:07:46.225 "data_offset": 0, 00:07:46.225 "data_size": 0 00:07:46.225 }, 00:07:46.225 { 00:07:46.225 "name": "BaseBdev2", 00:07:46.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.225 "is_configured": false, 00:07:46.225 "data_offset": 0, 00:07:46.225 "data_size": 0 00:07:46.225 } 00:07:46.225 ] 00:07:46.225 }' 00:07:46.225 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.225 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.484 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.484 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.484 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.484 [2024-11-18 23:03:05.774141] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.484 [2024-11-18 23:03:05.774189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:46.484 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.484 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.484 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:46.485 [2024-11-18 23:03:05.786144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.485 [2024-11-18 23:03:05.786182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.485 [2024-11-18 23:03:05.786205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.485 [2024-11-18 23:03:05.786214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 [2024-11-18 23:03:05.806987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.485 BaseBdev1 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 [ 00:07:46.485 { 00:07:46.485 "name": "BaseBdev1", 00:07:46.485 "aliases": [ 00:07:46.485 "b1ac8762-bbd4-4a3d-87d9-8afd1f439598" 00:07:46.485 ], 00:07:46.485 "product_name": "Malloc disk", 00:07:46.485 "block_size": 512, 00:07:46.485 "num_blocks": 65536, 00:07:46.485 "uuid": "b1ac8762-bbd4-4a3d-87d9-8afd1f439598", 00:07:46.485 "assigned_rate_limits": { 00:07:46.485 "rw_ios_per_sec": 0, 00:07:46.485 "rw_mbytes_per_sec": 0, 00:07:46.485 "r_mbytes_per_sec": 0, 00:07:46.485 "w_mbytes_per_sec": 0 00:07:46.485 }, 00:07:46.485 "claimed": true, 00:07:46.485 "claim_type": "exclusive_write", 00:07:46.485 "zoned": false, 00:07:46.485 "supported_io_types": { 00:07:46.485 "read": true, 00:07:46.485 "write": true, 00:07:46.485 "unmap": true, 00:07:46.485 "flush": true, 00:07:46.485 "reset": true, 00:07:46.485 "nvme_admin": false, 00:07:46.485 "nvme_io": false, 00:07:46.485 "nvme_io_md": false, 00:07:46.485 "write_zeroes": true, 00:07:46.485 "zcopy": true, 00:07:46.485 "get_zone_info": false, 00:07:46.485 "zone_management": false, 00:07:46.485 "zone_append": false, 00:07:46.485 "compare": false, 00:07:46.485 "compare_and_write": false, 00:07:46.485 "abort": true, 00:07:46.485 "seek_hole": false, 00:07:46.485 "seek_data": false, 00:07:46.485 "copy": true, 00:07:46.485 "nvme_iov_md": 
false 00:07:46.485 }, 00:07:46.485 "memory_domains": [ 00:07:46.485 { 00:07:46.485 "dma_device_id": "system", 00:07:46.485 "dma_device_type": 1 00:07:46.485 }, 00:07:46.485 { 00:07:46.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.485 "dma_device_type": 2 00:07:46.485 } 00:07:46.485 ], 00:07:46.485 "driver_specific": {} 00:07:46.485 } 00:07:46.485 ] 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.485 
23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.485 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.744 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.744 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.744 "name": "Existed_Raid", 00:07:46.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.744 "strip_size_kb": 0, 00:07:46.744 "state": "configuring", 00:07:46.744 "raid_level": "raid1", 00:07:46.744 "superblock": false, 00:07:46.744 "num_base_bdevs": 2, 00:07:46.744 "num_base_bdevs_discovered": 1, 00:07:46.744 "num_base_bdevs_operational": 2, 00:07:46.744 "base_bdevs_list": [ 00:07:46.744 { 00:07:46.744 "name": "BaseBdev1", 00:07:46.744 "uuid": "b1ac8762-bbd4-4a3d-87d9-8afd1f439598", 00:07:46.744 "is_configured": true, 00:07:46.744 "data_offset": 0, 00:07:46.744 "data_size": 65536 00:07:46.744 }, 00:07:46.744 { 00:07:46.744 "name": "BaseBdev2", 00:07:46.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.744 "is_configured": false, 00:07:46.744 "data_offset": 0, 00:07:46.744 "data_size": 0 00:07:46.744 } 00:07:46.744 ] 00:07:46.744 }' 00:07:46.744 23:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.744 23:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 [2024-11-18 23:03:06.270227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.005 [2024-11-18 23:03:06.270333] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 [2024-11-18 23:03:06.282229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.005 [2024-11-18 23:03:06.284070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.005 [2024-11-18 23:03:06.284145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.005 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.005 "name": "Existed_Raid", 00:07:47.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.005 "strip_size_kb": 0, 00:07:47.005 "state": "configuring", 00:07:47.005 "raid_level": "raid1", 00:07:47.005 "superblock": false, 00:07:47.005 "num_base_bdevs": 2, 00:07:47.005 "num_base_bdevs_discovered": 1, 00:07:47.005 "num_base_bdevs_operational": 2, 00:07:47.005 "base_bdevs_list": [ 00:07:47.005 { 00:07:47.005 "name": "BaseBdev1", 00:07:47.005 "uuid": "b1ac8762-bbd4-4a3d-87d9-8afd1f439598", 00:07:47.005 "is_configured": true, 00:07:47.005 "data_offset": 0, 00:07:47.005 "data_size": 65536 00:07:47.005 }, 00:07:47.005 { 00:07:47.006 "name": "BaseBdev2", 00:07:47.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.006 "is_configured": false, 00:07:47.006 "data_offset": 0, 00:07:47.006 "data_size": 0 00:07:47.006 } 00:07:47.006 ] 
00:07:47.006 }' 00:07:47.006 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.006 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.575 [2024-11-18 23:03:06.716182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.575 [2024-11-18 23:03:06.716233] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:47.575 [2024-11-18 23:03:06.716255] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:47.575 [2024-11-18 23:03:06.716622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:47.575 [2024-11-18 23:03:06.716785] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:47.575 [2024-11-18 23:03:06.716810] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:47.575 [2024-11-18 23:03:06.717051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.575 BaseBdev2 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.575 [ 00:07:47.575 { 00:07:47.575 "name": "BaseBdev2", 00:07:47.575 "aliases": [ 00:07:47.575 "d039bb3e-8738-475f-8864-313774591933" 00:07:47.575 ], 00:07:47.575 "product_name": "Malloc disk", 00:07:47.575 "block_size": 512, 00:07:47.575 "num_blocks": 65536, 00:07:47.575 "uuid": "d039bb3e-8738-475f-8864-313774591933", 00:07:47.575 "assigned_rate_limits": { 00:07:47.575 "rw_ios_per_sec": 0, 00:07:47.575 "rw_mbytes_per_sec": 0, 00:07:47.575 "r_mbytes_per_sec": 0, 00:07:47.575 "w_mbytes_per_sec": 0 00:07:47.575 }, 00:07:47.575 "claimed": true, 00:07:47.575 "claim_type": "exclusive_write", 00:07:47.575 "zoned": false, 00:07:47.575 "supported_io_types": { 00:07:47.575 "read": true, 00:07:47.575 "write": true, 00:07:47.575 "unmap": true, 00:07:47.575 "flush": true, 00:07:47.575 "reset": true, 00:07:47.575 "nvme_admin": false, 00:07:47.575 "nvme_io": false, 00:07:47.575 "nvme_io_md": false, 00:07:47.575 "write_zeroes": 
true, 00:07:47.575 "zcopy": true, 00:07:47.575 "get_zone_info": false, 00:07:47.575 "zone_management": false, 00:07:47.575 "zone_append": false, 00:07:47.575 "compare": false, 00:07:47.575 "compare_and_write": false, 00:07:47.575 "abort": true, 00:07:47.575 "seek_hole": false, 00:07:47.575 "seek_data": false, 00:07:47.575 "copy": true, 00:07:47.575 "nvme_iov_md": false 00:07:47.575 }, 00:07:47.575 "memory_domains": [ 00:07:47.575 { 00:07:47.575 "dma_device_id": "system", 00:07:47.575 "dma_device_type": 1 00:07:47.575 }, 00:07:47.575 { 00:07:47.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.575 "dma_device_type": 2 00:07:47.575 } 00:07:47.575 ], 00:07:47.575 "driver_specific": {} 00:07:47.575 } 00:07:47.575 ] 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.575 23:03:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.575 "name": "Existed_Raid", 00:07:47.575 "uuid": "0258fb26-5113-42ca-8bde-10d6fb1781ee", 00:07:47.575 "strip_size_kb": 0, 00:07:47.575 "state": "online", 00:07:47.575 "raid_level": "raid1", 00:07:47.575 "superblock": false, 00:07:47.575 "num_base_bdevs": 2, 00:07:47.575 "num_base_bdevs_discovered": 2, 00:07:47.575 "num_base_bdevs_operational": 2, 00:07:47.575 "base_bdevs_list": [ 00:07:47.575 { 00:07:47.575 "name": "BaseBdev1", 00:07:47.575 "uuid": "b1ac8762-bbd4-4a3d-87d9-8afd1f439598", 00:07:47.575 "is_configured": true, 00:07:47.575 "data_offset": 0, 00:07:47.575 "data_size": 65536 00:07:47.575 }, 00:07:47.575 { 00:07:47.575 "name": "BaseBdev2", 00:07:47.575 "uuid": "d039bb3e-8738-475f-8864-313774591933", 00:07:47.575 "is_configured": true, 00:07:47.575 "data_offset": 0, 00:07:47.575 "data_size": 65536 00:07:47.575 } 00:07:47.575 ] 00:07:47.575 }' 00:07:47.575 23:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.575 23:03:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.835 [2024-11-18 23:03:07.135744] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.835 "name": "Existed_Raid", 00:07:47.835 "aliases": [ 00:07:47.835 "0258fb26-5113-42ca-8bde-10d6fb1781ee" 00:07:47.835 ], 00:07:47.835 "product_name": "Raid Volume", 00:07:47.835 "block_size": 512, 00:07:47.835 "num_blocks": 65536, 00:07:47.835 "uuid": "0258fb26-5113-42ca-8bde-10d6fb1781ee", 00:07:47.835 "assigned_rate_limits": { 00:07:47.835 "rw_ios_per_sec": 0, 00:07:47.835 "rw_mbytes_per_sec": 0, 00:07:47.835 "r_mbytes_per_sec": 0, 00:07:47.835 
"w_mbytes_per_sec": 0 00:07:47.835 }, 00:07:47.835 "claimed": false, 00:07:47.835 "zoned": false, 00:07:47.835 "supported_io_types": { 00:07:47.835 "read": true, 00:07:47.835 "write": true, 00:07:47.835 "unmap": false, 00:07:47.835 "flush": false, 00:07:47.835 "reset": true, 00:07:47.835 "nvme_admin": false, 00:07:47.835 "nvme_io": false, 00:07:47.835 "nvme_io_md": false, 00:07:47.835 "write_zeroes": true, 00:07:47.835 "zcopy": false, 00:07:47.835 "get_zone_info": false, 00:07:47.835 "zone_management": false, 00:07:47.835 "zone_append": false, 00:07:47.835 "compare": false, 00:07:47.835 "compare_and_write": false, 00:07:47.835 "abort": false, 00:07:47.835 "seek_hole": false, 00:07:47.835 "seek_data": false, 00:07:47.835 "copy": false, 00:07:47.835 "nvme_iov_md": false 00:07:47.835 }, 00:07:47.835 "memory_domains": [ 00:07:47.835 { 00:07:47.835 "dma_device_id": "system", 00:07:47.835 "dma_device_type": 1 00:07:47.835 }, 00:07:47.835 { 00:07:47.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.835 "dma_device_type": 2 00:07:47.835 }, 00:07:47.835 { 00:07:47.835 "dma_device_id": "system", 00:07:47.835 "dma_device_type": 1 00:07:47.835 }, 00:07:47.835 { 00:07:47.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.835 "dma_device_type": 2 00:07:47.835 } 00:07:47.835 ], 00:07:47.835 "driver_specific": { 00:07:47.835 "raid": { 00:07:47.835 "uuid": "0258fb26-5113-42ca-8bde-10d6fb1781ee", 00:07:47.835 "strip_size_kb": 0, 00:07:47.835 "state": "online", 00:07:47.835 "raid_level": "raid1", 00:07:47.835 "superblock": false, 00:07:47.835 "num_base_bdevs": 2, 00:07:47.835 "num_base_bdevs_discovered": 2, 00:07:47.835 "num_base_bdevs_operational": 2, 00:07:47.835 "base_bdevs_list": [ 00:07:47.835 { 00:07:47.835 "name": "BaseBdev1", 00:07:47.835 "uuid": "b1ac8762-bbd4-4a3d-87d9-8afd1f439598", 00:07:47.835 "is_configured": true, 00:07:47.835 "data_offset": 0, 00:07:47.835 "data_size": 65536 00:07:47.835 }, 00:07:47.835 { 00:07:47.835 "name": "BaseBdev2", 00:07:47.835 "uuid": 
"d039bb3e-8738-475f-8864-313774591933", 00:07:47.835 "is_configured": true, 00:07:47.835 "data_offset": 0, 00:07:47.835 "data_size": 65536 00:07:47.835 } 00:07:47.835 ] 00:07:47.835 } 00:07:47.835 } 00:07:47.835 }' 00:07:47.835 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.101 BaseBdev2' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.101 23:03:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.101 [2024-11-18 23:03:07.375169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.101 "name": "Existed_Raid", 00:07:48.101 "uuid": "0258fb26-5113-42ca-8bde-10d6fb1781ee", 00:07:48.101 "strip_size_kb": 0, 00:07:48.101 "state": "online", 00:07:48.101 "raid_level": "raid1", 00:07:48.101 "superblock": false, 00:07:48.101 "num_base_bdevs": 2, 00:07:48.101 "num_base_bdevs_discovered": 1, 00:07:48.101 "num_base_bdevs_operational": 1, 00:07:48.101 "base_bdevs_list": [ 00:07:48.101 { 
00:07:48.101 "name": null, 00:07:48.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.101 "is_configured": false, 00:07:48.101 "data_offset": 0, 00:07:48.101 "data_size": 65536 00:07:48.101 }, 00:07:48.101 { 00:07:48.101 "name": "BaseBdev2", 00:07:48.101 "uuid": "d039bb3e-8738-475f-8864-313774591933", 00:07:48.101 "is_configured": true, 00:07:48.101 "data_offset": 0, 00:07:48.101 "data_size": 65536 00:07:48.101 } 00:07:48.101 ] 00:07:48.101 }' 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.101 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:48.670 [2024-11-18 23:03:07.885463] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.670 [2024-11-18 23:03:07.885594] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.670 [2024-11-18 23:03:07.896886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.670 [2024-11-18 23:03:07.896991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.670 [2024-11-18 23:03:07.897033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.670 23:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73963 00:07:48.671 23:03:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73963 ']' 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73963 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73963 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73963' 00:07:48.671 killing process with pid 73963 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73963 00:07:48.671 [2024-11-18 23:03:07.996226] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.671 23:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73963 00:07:48.671 [2024-11-18 23:03:07.997239] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.931 ************************************ 00:07:48.931 END TEST raid_state_function_test 00:07:48.931 ************************************ 00:07:48.931 23:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.931 00:07:48.931 real 0m3.845s 00:07:48.931 user 0m6.055s 00:07:48.931 sys 0m0.741s 00:07:48.931 23:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.931 23:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.931 23:03:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:48.931 23:03:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:48.931 23:03:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.931 23:03:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.931 ************************************ 00:07:48.931 START TEST raid_state_function_test_sb 00:07:48.931 ************************************ 00:07:48.931 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74200 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74200' 00:07:49.192 Process raid pid: 74200 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74200 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74200 ']' 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.192 23:03:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.192 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.192 [2024-11-18 23:03:08.398535] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.192 [2024-11-18 23:03:08.398725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.192 [2024-11-18 23:03:08.544278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.451 [2024-11-18 23:03:08.588947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.451 [2024-11-18 23:03:08.631384] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.451 [2024-11-18 23:03:08.631495] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.018 [2024-11-18 23:03:09.205098] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.018 [2024-11-18 23:03:09.205150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.018 [2024-11-18 23:03:09.205162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.018 [2024-11-18 23:03:09.205171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.018 "name": "Existed_Raid", 00:07:50.018 "uuid": "e11baa14-df82-42d9-aa90-9a4645d6e125", 00:07:50.018 "strip_size_kb": 0, 00:07:50.018 "state": "configuring", 00:07:50.018 "raid_level": "raid1", 00:07:50.018 "superblock": true, 00:07:50.018 "num_base_bdevs": 2, 00:07:50.018 "num_base_bdevs_discovered": 0, 00:07:50.018 "num_base_bdevs_operational": 2, 00:07:50.018 "base_bdevs_list": [ 00:07:50.018 { 00:07:50.018 "name": "BaseBdev1", 00:07:50.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.018 "is_configured": false, 00:07:50.018 "data_offset": 0, 00:07:50.018 "data_size": 0 00:07:50.018 }, 00:07:50.018 { 00:07:50.018 "name": "BaseBdev2", 00:07:50.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.018 "is_configured": false, 00:07:50.018 "data_offset": 0, 00:07:50.018 "data_size": 0 00:07:50.018 } 00:07:50.018 ] 00:07:50.018 }' 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.018 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 [2024-11-18 23:03:09.692164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:50.582 [2024-11-18 23:03:09.692253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 [2024-11-18 23:03:09.704179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.582 [2024-11-18 23:03:09.704255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.582 [2024-11-18 23:03:09.704288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.582 [2024-11-18 23:03:09.704311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 [2024-11-18 23:03:09.724997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.582 BaseBdev1 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 [ 00:07:50.582 { 00:07:50.582 "name": "BaseBdev1", 00:07:50.582 "aliases": [ 00:07:50.582 "a683345f-90a8-4e1d-9509-2e7d5015e1d2" 00:07:50.582 ], 00:07:50.582 "product_name": "Malloc disk", 00:07:50.582 "block_size": 512, 00:07:50.582 "num_blocks": 65536, 00:07:50.582 "uuid": "a683345f-90a8-4e1d-9509-2e7d5015e1d2", 00:07:50.582 "assigned_rate_limits": { 00:07:50.582 "rw_ios_per_sec": 0, 00:07:50.582 "rw_mbytes_per_sec": 0, 00:07:50.582 "r_mbytes_per_sec": 0, 00:07:50.582 "w_mbytes_per_sec": 0 00:07:50.582 }, 00:07:50.582 "claimed": true, 
00:07:50.582 "claim_type": "exclusive_write", 00:07:50.582 "zoned": false, 00:07:50.582 "supported_io_types": { 00:07:50.582 "read": true, 00:07:50.582 "write": true, 00:07:50.582 "unmap": true, 00:07:50.582 "flush": true, 00:07:50.582 "reset": true, 00:07:50.582 "nvme_admin": false, 00:07:50.582 "nvme_io": false, 00:07:50.582 "nvme_io_md": false, 00:07:50.582 "write_zeroes": true, 00:07:50.582 "zcopy": true, 00:07:50.582 "get_zone_info": false, 00:07:50.582 "zone_management": false, 00:07:50.582 "zone_append": false, 00:07:50.582 "compare": false, 00:07:50.582 "compare_and_write": false, 00:07:50.582 "abort": true, 00:07:50.582 "seek_hole": false, 00:07:50.582 "seek_data": false, 00:07:50.582 "copy": true, 00:07:50.582 "nvme_iov_md": false 00:07:50.582 }, 00:07:50.582 "memory_domains": [ 00:07:50.582 { 00:07:50.582 "dma_device_id": "system", 00:07:50.582 "dma_device_type": 1 00:07:50.582 }, 00:07:50.582 { 00:07:50.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.582 "dma_device_type": 2 00:07:50.582 } 00:07:50.582 ], 00:07:50.582 "driver_specific": {} 00:07:50.582 } 00:07:50.582 ] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.582 "name": "Existed_Raid", 00:07:50.582 "uuid": "b3f4472d-8db5-4438-bc1b-928c54f6f6eb", 00:07:50.582 "strip_size_kb": 0, 00:07:50.582 "state": "configuring", 00:07:50.582 "raid_level": "raid1", 00:07:50.582 "superblock": true, 00:07:50.582 "num_base_bdevs": 2, 00:07:50.582 "num_base_bdevs_discovered": 1, 00:07:50.582 "num_base_bdevs_operational": 2, 00:07:50.582 "base_bdevs_list": [ 00:07:50.582 { 00:07:50.582 "name": "BaseBdev1", 00:07:50.582 "uuid": "a683345f-90a8-4e1d-9509-2e7d5015e1d2", 00:07:50.582 "is_configured": true, 00:07:50.582 "data_offset": 2048, 00:07:50.582 "data_size": 63488 00:07:50.582 }, 00:07:50.582 { 00:07:50.582 "name": "BaseBdev2", 00:07:50.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.582 "is_configured": false, 00:07:50.582 
"data_offset": 0, 00:07:50.582 "data_size": 0 00:07:50.582 } 00:07:50.582 ] 00:07:50.582 }' 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.582 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.842 [2024-11-18 23:03:10.180266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.842 [2024-11-18 23:03:10.180362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.842 [2024-11-18 23:03:10.188308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.842 [2024-11-18 23:03:10.190097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.842 [2024-11-18 23:03:10.190187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.842 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.102 23:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.102 "name": "Existed_Raid", 00:07:51.102 "uuid": "c9c215ee-0a9e-4a2b-8182-90320a6c4031", 00:07:51.102 "strip_size_kb": 0, 00:07:51.102 "state": "configuring", 00:07:51.102 "raid_level": "raid1", 00:07:51.102 "superblock": true, 00:07:51.102 "num_base_bdevs": 2, 00:07:51.102 "num_base_bdevs_discovered": 1, 00:07:51.102 "num_base_bdevs_operational": 2, 00:07:51.102 "base_bdevs_list": [ 00:07:51.102 { 00:07:51.102 "name": "BaseBdev1", 00:07:51.102 "uuid": "a683345f-90a8-4e1d-9509-2e7d5015e1d2", 00:07:51.102 "is_configured": true, 00:07:51.102 "data_offset": 2048, 00:07:51.102 "data_size": 63488 00:07:51.102 }, 00:07:51.102 { 00:07:51.102 "name": "BaseBdev2", 00:07:51.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.102 "is_configured": false, 00:07:51.102 "data_offset": 0, 00:07:51.102 "data_size": 0 00:07:51.102 } 00:07:51.102 ] 00:07:51.102 }' 00:07:51.102 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.102 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.362 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.363 [2024-11-18 23:03:10.683536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.363 [2024-11-18 23:03:10.684192] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:51.363 [2024-11-18 23:03:10.684251] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.363 BaseBdev2 00:07:51.363 [2024-11-18 23:03:10.685102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ba0 00:07:51.363 [2024-11-18 23:03:10.685593] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.363 [2024-11-18 23:03:10.685817] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.363 [2024-11-18 23:03:10.686426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.363 [ 00:07:51.363 { 00:07:51.363 "name": "BaseBdev2", 00:07:51.363 "aliases": [ 00:07:51.363 "c66000e0-4ed0-421f-b91c-96848d50ebbb" 00:07:51.363 ], 00:07:51.363 "product_name": "Malloc disk", 00:07:51.363 "block_size": 512, 00:07:51.363 "num_blocks": 65536, 00:07:51.363 "uuid": "c66000e0-4ed0-421f-b91c-96848d50ebbb", 00:07:51.363 "assigned_rate_limits": { 00:07:51.363 "rw_ios_per_sec": 0, 00:07:51.363 "rw_mbytes_per_sec": 0, 00:07:51.363 "r_mbytes_per_sec": 0, 00:07:51.363 "w_mbytes_per_sec": 0 00:07:51.363 }, 00:07:51.363 "claimed": true, 00:07:51.363 "claim_type": "exclusive_write", 00:07:51.363 "zoned": false, 00:07:51.363 "supported_io_types": { 00:07:51.363 "read": true, 00:07:51.363 "write": true, 00:07:51.363 "unmap": true, 00:07:51.363 "flush": true, 00:07:51.363 "reset": true, 00:07:51.363 "nvme_admin": false, 00:07:51.363 "nvme_io": false, 00:07:51.363 "nvme_io_md": false, 00:07:51.363 "write_zeroes": true, 00:07:51.363 "zcopy": true, 00:07:51.363 "get_zone_info": false, 00:07:51.363 "zone_management": false, 00:07:51.363 "zone_append": false, 00:07:51.363 "compare": false, 00:07:51.363 "compare_and_write": false, 00:07:51.363 "abort": true, 00:07:51.363 "seek_hole": false, 00:07:51.363 "seek_data": false, 00:07:51.363 "copy": true, 00:07:51.363 "nvme_iov_md": false 00:07:51.363 }, 00:07:51.363 "memory_domains": [ 00:07:51.363 { 00:07:51.363 "dma_device_id": "system", 00:07:51.363 "dma_device_type": 1 00:07:51.363 }, 00:07:51.363 { 00:07:51.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.363 "dma_device_type": 2 00:07:51.363 } 00:07:51.363 ], 00:07:51.363 "driver_specific": {} 00:07:51.363 } 00:07:51.363 ] 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.363 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.627 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.627 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:51.627 "name": "Existed_Raid", 00:07:51.627 "uuid": "c9c215ee-0a9e-4a2b-8182-90320a6c4031", 00:07:51.627 "strip_size_kb": 0, 00:07:51.627 "state": "online", 00:07:51.627 "raid_level": "raid1", 00:07:51.627 "superblock": true, 00:07:51.627 "num_base_bdevs": 2, 00:07:51.627 "num_base_bdevs_discovered": 2, 00:07:51.627 "num_base_bdevs_operational": 2, 00:07:51.627 "base_bdevs_list": [ 00:07:51.627 { 00:07:51.627 "name": "BaseBdev1", 00:07:51.627 "uuid": "a683345f-90a8-4e1d-9509-2e7d5015e1d2", 00:07:51.627 "is_configured": true, 00:07:51.627 "data_offset": 2048, 00:07:51.627 "data_size": 63488 00:07:51.627 }, 00:07:51.627 { 00:07:51.627 "name": "BaseBdev2", 00:07:51.627 "uuid": "c66000e0-4ed0-421f-b91c-96848d50ebbb", 00:07:51.627 "is_configured": true, 00:07:51.628 "data_offset": 2048, 00:07:51.628 "data_size": 63488 00:07:51.628 } 00:07:51.628 ] 00:07:51.628 }' 00:07:51.628 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.628 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.892 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.893 [2024-11-18 23:03:11.174951] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.893 "name": "Existed_Raid", 00:07:51.893 "aliases": [ 00:07:51.893 "c9c215ee-0a9e-4a2b-8182-90320a6c4031" 00:07:51.893 ], 00:07:51.893 "product_name": "Raid Volume", 00:07:51.893 "block_size": 512, 00:07:51.893 "num_blocks": 63488, 00:07:51.893 "uuid": "c9c215ee-0a9e-4a2b-8182-90320a6c4031", 00:07:51.893 "assigned_rate_limits": { 00:07:51.893 "rw_ios_per_sec": 0, 00:07:51.893 "rw_mbytes_per_sec": 0, 00:07:51.893 "r_mbytes_per_sec": 0, 00:07:51.893 "w_mbytes_per_sec": 0 00:07:51.893 }, 00:07:51.893 "claimed": false, 00:07:51.893 "zoned": false, 00:07:51.893 "supported_io_types": { 00:07:51.893 "read": true, 00:07:51.893 "write": true, 00:07:51.893 "unmap": false, 00:07:51.893 "flush": false, 00:07:51.893 "reset": true, 00:07:51.893 "nvme_admin": false, 00:07:51.893 "nvme_io": false, 00:07:51.893 "nvme_io_md": false, 00:07:51.893 "write_zeroes": true, 00:07:51.893 "zcopy": false, 00:07:51.893 "get_zone_info": false, 00:07:51.893 "zone_management": false, 00:07:51.893 "zone_append": false, 00:07:51.893 "compare": false, 00:07:51.893 "compare_and_write": false, 00:07:51.893 "abort": false, 00:07:51.893 "seek_hole": false, 00:07:51.893 "seek_data": false, 00:07:51.893 "copy": false, 00:07:51.893 "nvme_iov_md": false 00:07:51.893 }, 00:07:51.893 "memory_domains": [ 00:07:51.893 { 00:07:51.893 "dma_device_id": "system", 00:07:51.893 "dma_device_type": 1 00:07:51.893 }, 
00:07:51.893 { 00:07:51.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.893 "dma_device_type": 2 00:07:51.893 }, 00:07:51.893 { 00:07:51.893 "dma_device_id": "system", 00:07:51.893 "dma_device_type": 1 00:07:51.893 }, 00:07:51.893 { 00:07:51.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.893 "dma_device_type": 2 00:07:51.893 } 00:07:51.893 ], 00:07:51.893 "driver_specific": { 00:07:51.893 "raid": { 00:07:51.893 "uuid": "c9c215ee-0a9e-4a2b-8182-90320a6c4031", 00:07:51.893 "strip_size_kb": 0, 00:07:51.893 "state": "online", 00:07:51.893 "raid_level": "raid1", 00:07:51.893 "superblock": true, 00:07:51.893 "num_base_bdevs": 2, 00:07:51.893 "num_base_bdevs_discovered": 2, 00:07:51.893 "num_base_bdevs_operational": 2, 00:07:51.893 "base_bdevs_list": [ 00:07:51.893 { 00:07:51.893 "name": "BaseBdev1", 00:07:51.893 "uuid": "a683345f-90a8-4e1d-9509-2e7d5015e1d2", 00:07:51.893 "is_configured": true, 00:07:51.893 "data_offset": 2048, 00:07:51.893 "data_size": 63488 00:07:51.893 }, 00:07:51.893 { 00:07:51.893 "name": "BaseBdev2", 00:07:51.893 "uuid": "c66000e0-4ed0-421f-b91c-96848d50ebbb", 00:07:51.893 "is_configured": true, 00:07:51.893 "data_offset": 2048, 00:07:51.893 "data_size": 63488 00:07:51.893 } 00:07:51.893 ] 00:07:51.893 } 00:07:51.893 } 00:07:51.893 }' 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.893 BaseBdev2' 00:07:51.893 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 [2024-11-18 23:03:11.374378] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.153 
23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.153 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.153 "name": "Existed_Raid", 00:07:52.153 "uuid": "c9c215ee-0a9e-4a2b-8182-90320a6c4031", 00:07:52.153 "strip_size_kb": 0, 00:07:52.153 "state": "online", 00:07:52.153 "raid_level": "raid1", 00:07:52.153 "superblock": true, 00:07:52.153 "num_base_bdevs": 2, 00:07:52.153 "num_base_bdevs_discovered": 1, 00:07:52.154 "num_base_bdevs_operational": 1, 00:07:52.154 "base_bdevs_list": [ 00:07:52.154 { 00:07:52.154 "name": null, 00:07:52.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.154 "is_configured": false, 00:07:52.154 "data_offset": 0, 00:07:52.154 "data_size": 63488 00:07:52.154 }, 00:07:52.154 { 00:07:52.154 "name": "BaseBdev2", 00:07:52.154 "uuid": "c66000e0-4ed0-421f-b91c-96848d50ebbb", 00:07:52.154 "is_configured": true, 00:07:52.154 "data_offset": 2048, 00:07:52.154 "data_size": 63488 00:07:52.154 } 00:07:52.154 ] 00:07:52.154 }' 00:07:52.154 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.154 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:52.723 23:03:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.723 [2024-11-18 23:03:11.848701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.723 [2024-11-18 23:03:11.848793] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.723 [2024-11-18 23:03:11.860254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.723 [2024-11-18 23:03:11.860363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.723 [2024-11-18 23:03:11.860417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74200 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74200 ']' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74200 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74200 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.723 23:03:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74200' 00:07:52.723 killing process with pid 74200 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74200 00:07:52.723 [2024-11-18 23:03:11.948300] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.723 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74200 00:07:52.723 [2024-11-18 23:03:11.949310] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.983 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.983 00:07:52.983 real 0m3.884s 00:07:52.983 user 0m6.154s 00:07:52.983 sys 0m0.725s 00:07:52.983 ************************************ 00:07:52.983 END TEST raid_state_function_test_sb 00:07:52.983 ************************************ 00:07:52.983 23:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.983 23:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.983 23:03:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:52.983 23:03:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:52.983 23:03:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.983 23:03:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.983 ************************************ 00:07:52.983 START TEST raid_superblock_test 00:07:52.983 ************************************ 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74441 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74441 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74441 ']' 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.983 23:03:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.983 [2024-11-18 23:03:12.346692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:52.983 [2024-11-18 23:03:12.346899] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74441 ] 00:07:53.243 [2024-11-18 23:03:12.495929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.243 [2024-11-18 23:03:12.539823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.243 [2024-11-18 23:03:12.582507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.243 [2024-11-18 23:03:12.582621] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.813 23:03:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.813 malloc1 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.813 [2024-11-18 23:03:13.181314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.813 [2024-11-18 23:03:13.181439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.813 [2024-11-18 23:03:13.181477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.813 [2024-11-18 23:03:13.181522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.813 
[2024-11-18 23:03:13.183589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.813 [2024-11-18 23:03:13.183665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.813 pt1 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:53.813 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.814 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.814 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.814 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.074 malloc2 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.074 23:03:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.074 [2024-11-18 23:03:13.226987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.074 [2024-11-18 23:03:13.227058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.074 [2024-11-18 23:03:13.227080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.074 [2024-11-18 23:03:13.227095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.074 [2024-11-18 23:03:13.229990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.074 [2024-11-18 23:03:13.230038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.074 pt2 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.074 [2024-11-18 23:03:13.238981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.074 [2024-11-18 23:03:13.240859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.074 [2024-11-18 23:03:13.241056] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:54.074 [2024-11-18 23:03:13.241076] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.074 [2024-11-18 
23:03:13.241346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:54.074 [2024-11-18 23:03:13.241483] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:54.074 [2024-11-18 23:03:13.241493] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:54.074 [2024-11-18 23:03:13.241625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.074 23:03:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.074 "name": "raid_bdev1", 00:07:54.074 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:54.074 "strip_size_kb": 0, 00:07:54.074 "state": "online", 00:07:54.074 "raid_level": "raid1", 00:07:54.074 "superblock": true, 00:07:54.074 "num_base_bdevs": 2, 00:07:54.074 "num_base_bdevs_discovered": 2, 00:07:54.074 "num_base_bdevs_operational": 2, 00:07:54.074 "base_bdevs_list": [ 00:07:54.074 { 00:07:54.074 "name": "pt1", 00:07:54.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.074 "is_configured": true, 00:07:54.074 "data_offset": 2048, 00:07:54.074 "data_size": 63488 00:07:54.074 }, 00:07:54.074 { 00:07:54.074 "name": "pt2", 00:07:54.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.074 "is_configured": true, 00:07:54.074 "data_offset": 2048, 00:07:54.074 "data_size": 63488 00:07:54.074 } 00:07:54.074 ] 00:07:54.074 }' 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.074 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.334 
23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.334 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.334 [2024-11-18 23:03:13.710442] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.593 "name": "raid_bdev1", 00:07:54.593 "aliases": [ 00:07:54.593 "bc940fc1-4d53-4b42-b29d-1f11e2724028" 00:07:54.593 ], 00:07:54.593 "product_name": "Raid Volume", 00:07:54.593 "block_size": 512, 00:07:54.593 "num_blocks": 63488, 00:07:54.593 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:54.593 "assigned_rate_limits": { 00:07:54.593 "rw_ios_per_sec": 0, 00:07:54.593 "rw_mbytes_per_sec": 0, 00:07:54.593 "r_mbytes_per_sec": 0, 00:07:54.593 "w_mbytes_per_sec": 0 00:07:54.593 }, 00:07:54.593 "claimed": false, 00:07:54.593 "zoned": false, 00:07:54.593 "supported_io_types": { 00:07:54.593 "read": true, 00:07:54.593 "write": true, 00:07:54.593 "unmap": false, 00:07:54.593 "flush": false, 00:07:54.593 "reset": true, 00:07:54.593 "nvme_admin": false, 00:07:54.593 "nvme_io": false, 00:07:54.593 "nvme_io_md": false, 00:07:54.593 "write_zeroes": true, 00:07:54.593 "zcopy": false, 00:07:54.593 "get_zone_info": false, 00:07:54.593 "zone_management": false, 00:07:54.593 "zone_append": false, 00:07:54.593 "compare": false, 00:07:54.593 "compare_and_write": false, 00:07:54.593 "abort": false, 00:07:54.593 "seek_hole": false, 
00:07:54.593 "seek_data": false, 00:07:54.593 "copy": false, 00:07:54.593 "nvme_iov_md": false 00:07:54.593 }, 00:07:54.593 "memory_domains": [ 00:07:54.593 { 00:07:54.593 "dma_device_id": "system", 00:07:54.593 "dma_device_type": 1 00:07:54.593 }, 00:07:54.593 { 00:07:54.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.593 "dma_device_type": 2 00:07:54.593 }, 00:07:54.593 { 00:07:54.593 "dma_device_id": "system", 00:07:54.593 "dma_device_type": 1 00:07:54.593 }, 00:07:54.593 { 00:07:54.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.593 "dma_device_type": 2 00:07:54.593 } 00:07:54.593 ], 00:07:54.593 "driver_specific": { 00:07:54.593 "raid": { 00:07:54.593 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:54.593 "strip_size_kb": 0, 00:07:54.593 "state": "online", 00:07:54.593 "raid_level": "raid1", 00:07:54.593 "superblock": true, 00:07:54.593 "num_base_bdevs": 2, 00:07:54.593 "num_base_bdevs_discovered": 2, 00:07:54.593 "num_base_bdevs_operational": 2, 00:07:54.593 "base_bdevs_list": [ 00:07:54.593 { 00:07:54.593 "name": "pt1", 00:07:54.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.593 "is_configured": true, 00:07:54.593 "data_offset": 2048, 00:07:54.593 "data_size": 63488 00:07:54.593 }, 00:07:54.593 { 00:07:54.593 "name": "pt2", 00:07:54.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.593 "is_configured": true, 00:07:54.593 "data_offset": 2048, 00:07:54.593 "data_size": 63488 00:07:54.593 } 00:07:54.593 ] 00:07:54.593 } 00:07:54.593 } 00:07:54.593 }' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.593 pt2' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.593 23:03:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.593 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.593 [2024-11-18 23:03:13.965900] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.852 23:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc940fc1-4d53-4b42-b29d-1f11e2724028 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc940fc1-4d53-4b42-b29d-1f11e2724028 ']' 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 [2024-11-18 23:03:14.009595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.852 [2024-11-18 23:03:14.009619] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.852 [2024-11-18 23:03:14.009680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.852 [2024-11-18 23:03:14.009760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.852 [2024-11-18 23:03:14.009770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 [2024-11-18 23:03:14.149382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:54.852 [2024-11-18 23:03:14.151162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:54.852 [2024-11-18 23:03:14.151236] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:54.852 [2024-11-18 23:03:14.151293] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:54.852 [2024-11-18 23:03:14.151310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.852 [2024-11-18 23:03:14.151318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:54.852 request: 00:07:54.852 { 00:07:54.852 "name": "raid_bdev1", 00:07:54.852 "raid_level": "raid1", 00:07:54.852 "base_bdevs": [ 00:07:54.852 "malloc1", 00:07:54.852 "malloc2" 00:07:54.852 ], 00:07:54.852 "superblock": false, 00:07:54.852 "method": "bdev_raid_create", 00:07:54.852 "req_id": 1 00:07:54.852 } 00:07:54.852 Got JSON-RPC error response 00:07:54.852 response: 00:07:54.852 { 00:07:54.852 "code": -17, 00:07:54.852 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:54.852 } 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:54.852 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.853 [2024-11-18 23:03:14.201254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.853 [2024-11-18 23:03:14.201307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.853 [2024-11-18 23:03:14.201324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:54.853 [2024-11-18 23:03:14.201332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.853 [2024-11-18 23:03:14.203339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.853 [2024-11-18 23:03:14.203371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.853 [2024-11-18 23:03:14.203433] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:54.853 [2024-11-18 23:03:14.203469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.853 pt1 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.853 23:03:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.853 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.112 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.112 "name": "raid_bdev1", 00:07:55.112 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:55.112 "strip_size_kb": 0, 00:07:55.112 "state": "configuring", 00:07:55.112 "raid_level": "raid1", 00:07:55.112 "superblock": true, 00:07:55.112 "num_base_bdevs": 2, 00:07:55.112 "num_base_bdevs_discovered": 1, 00:07:55.112 "num_base_bdevs_operational": 2, 00:07:55.112 "base_bdevs_list": [ 00:07:55.112 { 00:07:55.112 "name": "pt1", 00:07:55.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.112 
"is_configured": true, 00:07:55.112 "data_offset": 2048, 00:07:55.112 "data_size": 63488 00:07:55.112 }, 00:07:55.112 { 00:07:55.112 "name": null, 00:07:55.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.112 "is_configured": false, 00:07:55.112 "data_offset": 2048, 00:07:55.112 "data_size": 63488 00:07:55.112 } 00:07:55.112 ] 00:07:55.112 }' 00:07:55.112 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.112 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 [2024-11-18 23:03:14.616554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.372 [2024-11-18 23:03:14.616661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.372 [2024-11-18 23:03:14.616700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.372 [2024-11-18 23:03:14.616727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.372 [2024-11-18 23:03:14.617126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.372 [2024-11-18 23:03:14.617184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.372 [2024-11-18 23:03:14.617275] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.372 [2024-11-18 23:03:14.617333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.372 [2024-11-18 23:03:14.617440] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:55.372 [2024-11-18 23:03:14.617476] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.372 [2024-11-18 23:03:14.617718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:55.372 [2024-11-18 23:03:14.617871] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:55.372 [2024-11-18 23:03:14.617917] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:55.372 [2024-11-18 23:03:14.618053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.372 pt2 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.372 
23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.372 "name": "raid_bdev1", 00:07:55.372 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:55.372 "strip_size_kb": 0, 00:07:55.372 "state": "online", 00:07:55.372 "raid_level": "raid1", 00:07:55.372 "superblock": true, 00:07:55.372 "num_base_bdevs": 2, 00:07:55.372 "num_base_bdevs_discovered": 2, 00:07:55.372 "num_base_bdevs_operational": 2, 00:07:55.372 "base_bdevs_list": [ 00:07:55.372 { 00:07:55.372 "name": "pt1", 00:07:55.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.372 "is_configured": true, 00:07:55.372 "data_offset": 2048, 00:07:55.372 "data_size": 63488 00:07:55.372 }, 00:07:55.372 { 00:07:55.372 "name": "pt2", 00:07:55.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.372 "is_configured": true, 00:07:55.372 "data_offset": 2048, 00:07:55.372 "data_size": 63488 00:07:55.372 } 00:07:55.372 ] 00:07:55.372 }' 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:55.372 23:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.940 [2024-11-18 23:03:15.075999] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.940 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.940 "name": "raid_bdev1", 00:07:55.940 "aliases": [ 00:07:55.940 "bc940fc1-4d53-4b42-b29d-1f11e2724028" 00:07:55.940 ], 00:07:55.940 "product_name": "Raid Volume", 00:07:55.940 "block_size": 512, 00:07:55.940 "num_blocks": 63488, 00:07:55.940 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:55.940 "assigned_rate_limits": { 00:07:55.940 "rw_ios_per_sec": 0, 00:07:55.940 "rw_mbytes_per_sec": 0, 00:07:55.940 "r_mbytes_per_sec": 0, 00:07:55.940 "w_mbytes_per_sec": 0 
00:07:55.940 }, 00:07:55.940 "claimed": false, 00:07:55.940 "zoned": false, 00:07:55.940 "supported_io_types": { 00:07:55.940 "read": true, 00:07:55.940 "write": true, 00:07:55.940 "unmap": false, 00:07:55.940 "flush": false, 00:07:55.940 "reset": true, 00:07:55.940 "nvme_admin": false, 00:07:55.940 "nvme_io": false, 00:07:55.940 "nvme_io_md": false, 00:07:55.940 "write_zeroes": true, 00:07:55.940 "zcopy": false, 00:07:55.940 "get_zone_info": false, 00:07:55.941 "zone_management": false, 00:07:55.941 "zone_append": false, 00:07:55.941 "compare": false, 00:07:55.941 "compare_and_write": false, 00:07:55.941 "abort": false, 00:07:55.941 "seek_hole": false, 00:07:55.941 "seek_data": false, 00:07:55.941 "copy": false, 00:07:55.941 "nvme_iov_md": false 00:07:55.941 }, 00:07:55.941 "memory_domains": [ 00:07:55.941 { 00:07:55.941 "dma_device_id": "system", 00:07:55.941 "dma_device_type": 1 00:07:55.941 }, 00:07:55.941 { 00:07:55.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.941 "dma_device_type": 2 00:07:55.941 }, 00:07:55.941 { 00:07:55.941 "dma_device_id": "system", 00:07:55.941 "dma_device_type": 1 00:07:55.941 }, 00:07:55.941 { 00:07:55.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.941 "dma_device_type": 2 00:07:55.941 } 00:07:55.941 ], 00:07:55.941 "driver_specific": { 00:07:55.941 "raid": { 00:07:55.941 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:55.941 "strip_size_kb": 0, 00:07:55.941 "state": "online", 00:07:55.941 "raid_level": "raid1", 00:07:55.941 "superblock": true, 00:07:55.941 "num_base_bdevs": 2, 00:07:55.941 "num_base_bdevs_discovered": 2, 00:07:55.941 "num_base_bdevs_operational": 2, 00:07:55.941 "base_bdevs_list": [ 00:07:55.941 { 00:07:55.941 "name": "pt1", 00:07:55.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.941 "is_configured": true, 00:07:55.941 "data_offset": 2048, 00:07:55.941 "data_size": 63488 00:07:55.941 }, 00:07:55.941 { 00:07:55.941 "name": "pt2", 00:07:55.941 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:55.941 "is_configured": true, 00:07:55.941 "data_offset": 2048, 00:07:55.941 "data_size": 63488 00:07:55.941 } 00:07:55.941 ] 00:07:55.941 } 00:07:55.941 } 00:07:55.941 }' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.941 pt2' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.941 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.941 [2024-11-18 23:03:15.303580] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc940fc1-4d53-4b42-b29d-1f11e2724028 '!=' bc940fc1-4d53-4b42-b29d-1f11e2724028 ']' 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.200 [2024-11-18 23:03:15.347303] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:56.200 "name": "raid_bdev1", 00:07:56.200 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:56.200 "strip_size_kb": 0, 00:07:56.200 "state": "online", 00:07:56.200 "raid_level": "raid1", 00:07:56.200 "superblock": true, 00:07:56.200 "num_base_bdevs": 2, 00:07:56.200 "num_base_bdevs_discovered": 1, 00:07:56.200 "num_base_bdevs_operational": 1, 00:07:56.200 "base_bdevs_list": [ 00:07:56.200 { 00:07:56.200 "name": null, 00:07:56.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.200 "is_configured": false, 00:07:56.200 "data_offset": 0, 00:07:56.200 "data_size": 63488 00:07:56.200 }, 00:07:56.200 { 00:07:56.200 "name": "pt2", 00:07:56.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.200 "is_configured": true, 00:07:56.200 "data_offset": 2048, 00:07:56.200 "data_size": 63488 00:07:56.200 } 00:07:56.200 ] 00:07:56.200 }' 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.200 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.459 [2024-11-18 23:03:15.790484] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.459 [2024-11-18 23:03:15.790551] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.459 [2024-11-18 23:03:15.790638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.459 [2024-11-18 23:03:15.790697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.459 [2024-11-18 23:03:15.790707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.459 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.718 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:56.718 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:56.718 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:56.718 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.719 [2024-11-18 23:03:15.866371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.719 [2024-11-18 23:03:15.866417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.719 [2024-11-18 23:03:15.866433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:56.719 [2024-11-18 23:03:15.866442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.719 [2024-11-18 23:03:15.868586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.719 [2024-11-18 23:03:15.868662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.719 [2024-11-18 23:03:15.868743] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:56.719 [2024-11-18 23:03:15.868777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.719 [2024-11-18 23:03:15.868854] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:56.719 [2024-11-18 23:03:15.868862] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.719 [2024-11-18 23:03:15.869079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.719 [2024-11-18 23:03:15.869195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:56.719 [2024-11-18 23:03:15.869208] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:07:56.719 [2024-11-18 23:03:15.869329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.719 pt2 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:56.719 "name": "raid_bdev1", 00:07:56.719 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:56.719 "strip_size_kb": 0, 00:07:56.719 "state": "online", 00:07:56.719 "raid_level": "raid1", 00:07:56.719 "superblock": true, 00:07:56.719 "num_base_bdevs": 2, 00:07:56.719 "num_base_bdevs_discovered": 1, 00:07:56.719 "num_base_bdevs_operational": 1, 00:07:56.719 "base_bdevs_list": [ 00:07:56.719 { 00:07:56.719 "name": null, 00:07:56.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.719 "is_configured": false, 00:07:56.719 "data_offset": 2048, 00:07:56.719 "data_size": 63488 00:07:56.719 }, 00:07:56.719 { 00:07:56.719 "name": "pt2", 00:07:56.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.719 "is_configured": true, 00:07:56.719 "data_offset": 2048, 00:07:56.719 "data_size": 63488 00:07:56.719 } 00:07:56.719 ] 00:07:56.719 }' 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.719 23:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.980 [2024-11-18 23:03:16.277664] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.980 [2024-11-18 23:03:16.277730] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.980 [2024-11-18 23:03:16.277804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.980 [2024-11-18 23:03:16.277860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.980 [2024-11-18 23:03:16.277894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.980 [2024-11-18 23:03:16.325552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.980 [2024-11-18 23:03:16.325636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.980 [2024-11-18 23:03:16.325672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:56.980 [2024-11-18 23:03:16.325708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.980 [2024-11-18 23:03:16.327760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.980 [2024-11-18 23:03:16.327845] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.980 [2024-11-18 23:03:16.327927] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:56.980 [2024-11-18 23:03:16.327985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.980 [2024-11-18 23:03:16.328104] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:56.980 [2024-11-18 23:03:16.328165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.980 [2024-11-18 23:03:16.328217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:56.980 [2024-11-18 23:03:16.328301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.980 [2024-11-18 23:03:16.328406] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:56.980 [2024-11-18 23:03:16.328448] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.980 [2024-11-18 23:03:16.328684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.980 [2024-11-18 23:03:16.328830] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:56.980 [2024-11-18 23:03:16.328871] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:56.980 [2024-11-18 23:03:16.329018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.980 pt1 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.980 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.240 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.240 "name": "raid_bdev1", 00:07:57.240 "uuid": "bc940fc1-4d53-4b42-b29d-1f11e2724028", 00:07:57.240 "strip_size_kb": 0, 00:07:57.240 "state": "online", 00:07:57.240 "raid_level": "raid1", 00:07:57.240 "superblock": true, 00:07:57.240 "num_base_bdevs": 2, 00:07:57.240 "num_base_bdevs_discovered": 1, 00:07:57.240 "num_base_bdevs_operational": 
1, 00:07:57.240 "base_bdevs_list": [ 00:07:57.240 { 00:07:57.240 "name": null, 00:07:57.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.240 "is_configured": false, 00:07:57.240 "data_offset": 2048, 00:07:57.240 "data_size": 63488 00:07:57.240 }, 00:07:57.240 { 00:07:57.240 "name": "pt2", 00:07:57.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.240 "is_configured": true, 00:07:57.240 "data_offset": 2048, 00:07:57.240 "data_size": 63488 00:07:57.240 } 00:07:57.240 ] 00:07:57.240 }' 00:07:57.240 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.240 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.500 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.501 [2024-11-18 23:03:16.816950] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bc940fc1-4d53-4b42-b29d-1f11e2724028 '!=' bc940fc1-4d53-4b42-b29d-1f11e2724028 ']' 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74441 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74441 ']' 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74441 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.501 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74441 00:07:57.761 killing process with pid 74441 00:07:57.761 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.761 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.761 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74441' 00:07:57.761 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74441 00:07:57.761 23:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74441 00:07:57.761 [2024-11-18 23:03:16.899415] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.761 [2024-11-18 23:03:16.899494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.761 [2024-11-18 23:03:16.899561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.761 [2024-11-18 23:03:16.899570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state 
offline 00:07:57.761 [2024-11-18 23:03:16.921711] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.020 23:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:58.020 00:07:58.020 real 0m4.891s 00:07:58.020 user 0m8.032s 00:07:58.020 sys 0m0.976s 00:07:58.020 23:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.020 23:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.020 ************************************ 00:07:58.020 END TEST raid_superblock_test 00:07:58.021 ************************************ 00:07:58.021 23:03:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:58.021 23:03:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:58.021 23:03:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.021 23:03:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.021 ************************************ 00:07:58.021 START TEST raid_read_error_test 00:07:58.021 ************************************ 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4CXZjO9HTT 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74760 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74760 00:07:58.021 
23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74760 ']' 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.021 23:03:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.021 [2024-11-18 23:03:17.322622] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:58.021 [2024-11-18 23:03:17.322835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74760 ] 00:07:58.282 [2024-11-18 23:03:17.480751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.282 [2024-11-18 23:03:17.525393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.282 [2024-11-18 23:03:17.567542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.282 [2024-11-18 23:03:17.567577] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.850 BaseBdev1_malloc 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.850 true 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.850 [2024-11-18 23:03:18.173773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.850 [2024-11-18 23:03:18.173821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.850 [2024-11-18 23:03:18.173862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.850 [2024-11-18 23:03:18.173871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.850 [2024-11-18 23:03:18.175915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.850 [2024-11-18 23:03:18.175952] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.850 BaseBdev1 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.850 BaseBdev2_malloc 00:07:58.850 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.851 true 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.851 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.110 [2024-11-18 23:03:18.231316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:59.110 [2024-11-18 23:03:18.231380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.110 [2024-11-18 23:03:18.231408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:59.110 [2024-11-18 23:03:18.231422] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.110 [2024-11-18 23:03:18.233902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.110 [2024-11-18 23:03:18.233939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:59.110 BaseBdev2 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.110 [2024-11-18 23:03:18.243265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.110 [2024-11-18 23:03:18.245113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.110 [2024-11-18 23:03:18.245380] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:59.110 [2024-11-18 23:03:18.245402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.110 [2024-11-18 23:03:18.245642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:59.110 [2024-11-18 23:03:18.245770] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:59.110 [2024-11-18 23:03:18.245783] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:59.110 [2024-11-18 23:03:18.245912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.110 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.111 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.111 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.111 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.111 "name": "raid_bdev1", 00:07:59.111 "uuid": "a1066078-5923-40a6-a291-58fa36d5fc69", 00:07:59.111 "strip_size_kb": 0, 00:07:59.111 "state": "online", 00:07:59.111 "raid_level": "raid1", 00:07:59.111 "superblock": true, 00:07:59.111 "num_base_bdevs": 2, 00:07:59.111 
"num_base_bdevs_discovered": 2, 00:07:59.111 "num_base_bdevs_operational": 2, 00:07:59.111 "base_bdevs_list": [ 00:07:59.111 { 00:07:59.111 "name": "BaseBdev1", 00:07:59.111 "uuid": "2a16a0a9-5971-5233-819f-bde68d0962e5", 00:07:59.111 "is_configured": true, 00:07:59.111 "data_offset": 2048, 00:07:59.111 "data_size": 63488 00:07:59.111 }, 00:07:59.111 { 00:07:59.111 "name": "BaseBdev2", 00:07:59.111 "uuid": "eab77205-f3bc-53bb-b3ed-55333617c1c5", 00:07:59.111 "is_configured": true, 00:07:59.111 "data_offset": 2048, 00:07:59.111 "data_size": 63488 00:07:59.111 } 00:07:59.111 ] 00:07:59.111 }' 00:07:59.111 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.111 23:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.369 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:59.369 23:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.629 [2024-11-18 23:03:18.790650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:00.571 23:03:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.571 "name": "raid_bdev1", 00:08:00.571 "uuid": "a1066078-5923-40a6-a291-58fa36d5fc69", 00:08:00.571 "strip_size_kb": 0, 00:08:00.571 "state": "online", 
00:08:00.571 "raid_level": "raid1", 00:08:00.571 "superblock": true, 00:08:00.571 "num_base_bdevs": 2, 00:08:00.571 "num_base_bdevs_discovered": 2, 00:08:00.571 "num_base_bdevs_operational": 2, 00:08:00.571 "base_bdevs_list": [ 00:08:00.571 { 00:08:00.571 "name": "BaseBdev1", 00:08:00.571 "uuid": "2a16a0a9-5971-5233-819f-bde68d0962e5", 00:08:00.571 "is_configured": true, 00:08:00.571 "data_offset": 2048, 00:08:00.571 "data_size": 63488 00:08:00.571 }, 00:08:00.571 { 00:08:00.571 "name": "BaseBdev2", 00:08:00.571 "uuid": "eab77205-f3bc-53bb-b3ed-55333617c1c5", 00:08:00.571 "is_configured": true, 00:08:00.571 "data_offset": 2048, 00:08:00.571 "data_size": 63488 00:08:00.571 } 00:08:00.571 ] 00:08:00.571 }' 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.571 23:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.831 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.831 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.831 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.832 [2024-11-18 23:03:20.158090] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.832 [2024-11-18 23:03:20.158122] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.832 [2024-11-18 23:03:20.160516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.832 [2024-11-18 23:03:20.160569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.832 [2024-11-18 23:03:20.160650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.832 [2024-11-18 23:03:20.160659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
raid_bdev1, state offline 00:08:00.832 { 00:08:00.832 "results": [ 00:08:00.832 { 00:08:00.832 "job": "raid_bdev1", 00:08:00.832 "core_mask": "0x1", 00:08:00.832 "workload": "randrw", 00:08:00.832 "percentage": 50, 00:08:00.832 "status": "finished", 00:08:00.832 "queue_depth": 1, 00:08:00.832 "io_size": 131072, 00:08:00.832 "runtime": 1.368223, 00:08:00.832 "iops": 20410.415553604933, 00:08:00.832 "mibps": 2551.3019442006166, 00:08:00.832 "io_failed": 0, 00:08:00.832 "io_timeout": 0, 00:08:00.832 "avg_latency_us": 46.56788824613521, 00:08:00.832 "min_latency_us": 21.575545851528386, 00:08:00.832 "max_latency_us": 1473.844541484716 00:08:00.832 } 00:08:00.832 ], 00:08:00.832 "core_count": 1 00:08:00.832 } 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74760 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74760 ']' 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74760 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74760 00:08:00.832 killing process with pid 74760 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74760' 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74760 00:08:00.832 [2024-11-18 
23:03:20.197111] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.832 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74760 00:08:01.092 [2024-11-18 23:03:20.212613] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4CXZjO9HTT 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:01.092 ************************************ 00:08:01.092 END TEST raid_read_error_test 00:08:01.092 ************************************ 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:01.092 00:08:01.092 real 0m3.224s 00:08:01.092 user 0m4.080s 00:08:01.092 sys 0m0.506s 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.092 23:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.353 23:03:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:01.353 23:03:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:01.353 23:03:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.353 23:03:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.353 ************************************ 00:08:01.353 START TEST 
raid_write_error_test 00:08:01.353 ************************************ 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.353 23:03:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4Gl88Lt6wp 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74889 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74889 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74889 ']' 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.353 23:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.353 [2024-11-18 23:03:20.622346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:01.353 [2024-11-18 23:03:20.622561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74889 ] 00:08:01.613 [2024-11-18 23:03:20.782126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.613 [2024-11-18 23:03:20.825975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.613 [2024-11-18 23:03:20.868033] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.613 [2024-11-18 23:03:20.868146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.214 BaseBdev1_malloc 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.214 true 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.214 [2024-11-18 23:03:21.466174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.214 [2024-11-18 23:03:21.466237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.214 [2024-11-18 23:03:21.466258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.214 [2024-11-18 23:03:21.466266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.214 [2024-11-18 23:03:21.468333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.214 [2024-11-18 23:03:21.468425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.214 BaseBdev1 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.214 BaseBdev2_malloc 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.214 23:03:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.214 true 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.214 [2024-11-18 23:03:21.513392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.214 [2024-11-18 23:03:21.513436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.214 [2024-11-18 23:03:21.513468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.214 [2024-11-18 23:03:21.513476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.214 [2024-11-18 23:03:21.515443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.214 [2024-11-18 23:03:21.515476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.214 BaseBdev2 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.214 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.215 [2024-11-18 23:03:21.525406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:02.215 [2024-11-18 23:03:21.527157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.215 [2024-11-18 23:03:21.527330] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:02.215 [2024-11-18 23:03:21.527344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.215 [2024-11-18 23:03:21.527581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:02.215 [2024-11-18 23:03:21.527726] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:02.215 [2024-11-18 23:03:21.527739] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:02.215 [2024-11-18 23:03:21.527852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.215 "name": "raid_bdev1", 00:08:02.215 "uuid": "e5064fa3-1d5b-48a3-a44a-887cd4ce74f3", 00:08:02.215 "strip_size_kb": 0, 00:08:02.215 "state": "online", 00:08:02.215 "raid_level": "raid1", 00:08:02.215 "superblock": true, 00:08:02.215 "num_base_bdevs": 2, 00:08:02.215 "num_base_bdevs_discovered": 2, 00:08:02.215 "num_base_bdevs_operational": 2, 00:08:02.215 "base_bdevs_list": [ 00:08:02.215 { 00:08:02.215 "name": "BaseBdev1", 00:08:02.215 "uuid": "e8ac23f9-7a3e-561b-9841-2e24d0d17fd4", 00:08:02.215 "is_configured": true, 00:08:02.215 "data_offset": 2048, 00:08:02.215 "data_size": 63488 00:08:02.215 }, 00:08:02.215 { 00:08:02.215 "name": "BaseBdev2", 00:08:02.215 "uuid": "7db9841b-fc04-5978-a562-3416ec437efa", 00:08:02.215 "is_configured": true, 00:08:02.215 "data_offset": 2048, 00:08:02.215 "data_size": 63488 00:08:02.215 } 00:08:02.215 ] 00:08:02.215 }' 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.215 23:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.783 23:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:02.783 23:03:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:02.783 [2024-11-18 23:03:22.052851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.724 [2024-11-18 23:03:22.976442] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:03.724 [2024-11-18 23:03:22.976592] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.724 [2024-11-18 23:03:22.976818] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.724 23:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.724 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.724 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.724 "name": "raid_bdev1", 00:08:03.724 "uuid": "e5064fa3-1d5b-48a3-a44a-887cd4ce74f3", 00:08:03.724 "strip_size_kb": 0, 00:08:03.724 "state": "online", 00:08:03.724 "raid_level": "raid1", 00:08:03.724 "superblock": true, 00:08:03.724 "num_base_bdevs": 2, 00:08:03.724 "num_base_bdevs_discovered": 1, 00:08:03.724 "num_base_bdevs_operational": 1, 00:08:03.724 "base_bdevs_list": [ 00:08:03.724 { 00:08:03.724 "name": null, 00:08:03.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.724 "is_configured": false, 00:08:03.724 "data_offset": 0, 00:08:03.724 "data_size": 63488 00:08:03.724 }, 00:08:03.724 { 00:08:03.724 "name": 
"BaseBdev2", 00:08:03.724 "uuid": "7db9841b-fc04-5978-a562-3416ec437efa", 00:08:03.724 "is_configured": true, 00:08:03.724 "data_offset": 2048, 00:08:03.724 "data_size": 63488 00:08:03.724 } 00:08:03.724 ] 00:08:03.724 }' 00:08:03.724 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.724 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.294 [2024-11-18 23:03:23.425359] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.294 [2024-11-18 23:03:23.425439] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.294 [2024-11-18 23:03:23.427916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.294 [2024-11-18 23:03:23.427996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.294 [2024-11-18 23:03:23.428064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.294 [2024-11-18 23:03:23.428123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74889 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74889 ']' 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74889 00:08:04.294 { 00:08:04.294 "results": [ 
00:08:04.294 { 00:08:04.294 "job": "raid_bdev1", 00:08:04.294 "core_mask": "0x1", 00:08:04.294 "workload": "randrw", 00:08:04.294 "percentage": 50, 00:08:04.294 "status": "finished", 00:08:04.294 "queue_depth": 1, 00:08:04.294 "io_size": 131072, 00:08:04.294 "runtime": 1.373437, 00:08:04.294 "iops": 23934.843753299207, 00:08:04.294 "mibps": 2991.855469162401, 00:08:04.294 "io_failed": 0, 00:08:04.294 "io_timeout": 0, 00:08:04.294 "avg_latency_us": 39.35546898298693, 00:08:04.294 "min_latency_us": 20.79301310043668, 00:08:04.294 "max_latency_us": 1359.3711790393013 00:08:04.294 } 00:08:04.294 ], 00:08:04.294 "core_count": 1 00:08:04.294 } 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74889 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74889' 00:08:04.294 killing process with pid 74889 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74889 00:08:04.294 [2024-11-18 23:03:23.471247] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.294 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74889 00:08:04.294 [2024-11-18 23:03:23.486481] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4Gl88Lt6wp 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:04.555 ************************************ 00:08:04.555 END TEST raid_write_error_test 00:08:04.555 ************************************ 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:04.555 00:08:04.555 real 0m3.206s 00:08:04.555 user 0m4.069s 00:08:04.555 sys 0m0.502s 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.555 23:03:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.555 23:03:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:04.556 23:03:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:04.556 23:03:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:04.556 23:03:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:04.556 23:03:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.556 23:03:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 ************************************ 00:08:04.556 START TEST raid_state_function_test 00:08:04.556 ************************************ 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.556 
23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75016 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75016' 00:08:04.556 Process raid pid: 75016 00:08:04.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75016 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75016 ']' 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.556 23:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 [2024-11-18 23:03:23.891722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:04.556 [2024-11-18 23:03:23.891831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.815 [2024-11-18 23:03:24.054047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.815 [2024-11-18 23:03:24.100581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.815 [2024-11-18 23:03:24.143233] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.815 [2024-11-18 23:03:24.143287] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 [2024-11-18 23:03:24.712558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.384 [2024-11-18 23:03:24.712651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.384 [2024-11-18 23:03:24.712684] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.384 [2024-11-18 23:03:24.712693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.384 [2024-11-18 23:03:24.712699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.384 [2024-11-18 23:03:24.712710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.384 23:03:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.384 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.642 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.642 "name": "Existed_Raid", 00:08:05.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.642 "strip_size_kb": 64, 00:08:05.642 "state": "configuring", 00:08:05.642 "raid_level": "raid0", 00:08:05.642 "superblock": false, 00:08:05.642 "num_base_bdevs": 3, 00:08:05.642 "num_base_bdevs_discovered": 0, 00:08:05.642 "num_base_bdevs_operational": 3, 00:08:05.642 "base_bdevs_list": [ 00:08:05.642 { 00:08:05.642 "name": "BaseBdev1", 00:08:05.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.642 "is_configured": false, 00:08:05.642 "data_offset": 0, 00:08:05.642 "data_size": 0 00:08:05.642 }, 00:08:05.642 { 00:08:05.642 "name": "BaseBdev2", 00:08:05.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.642 "is_configured": false, 00:08:05.642 "data_offset": 0, 00:08:05.642 "data_size": 0 00:08:05.642 }, 00:08:05.642 { 00:08:05.642 "name": "BaseBdev3", 00:08:05.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.642 "is_configured": false, 00:08:05.642 "data_offset": 0, 00:08:05.642 "data_size": 0 00:08:05.642 } 00:08:05.642 ] 00:08:05.642 }' 00:08:05.642 23:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.642 23:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.901 23:03:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.901 [2024-11-18 23:03:25.187632] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.901 [2024-11-18 23:03:25.187670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.901 [2024-11-18 23:03:25.199643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.901 [2024-11-18 23:03:25.199719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.901 [2024-11-18 23:03:25.199732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.901 [2024-11-18 23:03:25.199741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.901 [2024-11-18 23:03:25.199747] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.901 [2024-11-18 23:03:25.199755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.901 [2024-11-18 23:03:25.220421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.901 BaseBdev1 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.901 [ 00:08:05.901 { 00:08:05.901 "name": "BaseBdev1", 00:08:05.901 "aliases": [ 00:08:05.901 "a0490fa9-57a2-43e4-85ad-62eed4e66306" 00:08:05.901 ], 00:08:05.901 
"product_name": "Malloc disk", 00:08:05.901 "block_size": 512, 00:08:05.901 "num_blocks": 65536, 00:08:05.901 "uuid": "a0490fa9-57a2-43e4-85ad-62eed4e66306", 00:08:05.901 "assigned_rate_limits": { 00:08:05.901 "rw_ios_per_sec": 0, 00:08:05.901 "rw_mbytes_per_sec": 0, 00:08:05.901 "r_mbytes_per_sec": 0, 00:08:05.901 "w_mbytes_per_sec": 0 00:08:05.901 }, 00:08:05.901 "claimed": true, 00:08:05.901 "claim_type": "exclusive_write", 00:08:05.901 "zoned": false, 00:08:05.901 "supported_io_types": { 00:08:05.901 "read": true, 00:08:05.901 "write": true, 00:08:05.901 "unmap": true, 00:08:05.901 "flush": true, 00:08:05.901 "reset": true, 00:08:05.901 "nvme_admin": false, 00:08:05.901 "nvme_io": false, 00:08:05.901 "nvme_io_md": false, 00:08:05.901 "write_zeroes": true, 00:08:05.901 "zcopy": true, 00:08:05.901 "get_zone_info": false, 00:08:05.901 "zone_management": false, 00:08:05.901 "zone_append": false, 00:08:05.901 "compare": false, 00:08:05.901 "compare_and_write": false, 00:08:05.901 "abort": true, 00:08:05.901 "seek_hole": false, 00:08:05.901 "seek_data": false, 00:08:05.901 "copy": true, 00:08:05.901 "nvme_iov_md": false 00:08:05.901 }, 00:08:05.901 "memory_domains": [ 00:08:05.901 { 00:08:05.901 "dma_device_id": "system", 00:08:05.901 "dma_device_type": 1 00:08:05.901 }, 00:08:05.901 { 00:08:05.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.901 "dma_device_type": 2 00:08:05.901 } 00:08:05.901 ], 00:08:05.901 "driver_specific": {} 00:08:05.901 } 00:08:05.901 ] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.901 23:03:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.901 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.161 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.161 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.161 "name": "Existed_Raid", 00:08:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.161 "strip_size_kb": 64, 00:08:06.161 "state": "configuring", 00:08:06.161 "raid_level": "raid0", 00:08:06.161 "superblock": false, 00:08:06.161 "num_base_bdevs": 3, 00:08:06.161 "num_base_bdevs_discovered": 1, 00:08:06.161 "num_base_bdevs_operational": 3, 00:08:06.161 "base_bdevs_list": [ 00:08:06.161 { 00:08:06.161 "name": "BaseBdev1", 
00:08:06.161 "uuid": "a0490fa9-57a2-43e4-85ad-62eed4e66306", 00:08:06.161 "is_configured": true, 00:08:06.161 "data_offset": 0, 00:08:06.161 "data_size": 65536 00:08:06.161 }, 00:08:06.161 { 00:08:06.161 "name": "BaseBdev2", 00:08:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.161 "is_configured": false, 00:08:06.161 "data_offset": 0, 00:08:06.161 "data_size": 0 00:08:06.161 }, 00:08:06.161 { 00:08:06.161 "name": "BaseBdev3", 00:08:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.161 "is_configured": false, 00:08:06.161 "data_offset": 0, 00:08:06.161 "data_size": 0 00:08:06.161 } 00:08:06.161 ] 00:08:06.161 }' 00:08:06.161 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.161 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.420 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.420 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.420 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.420 [2024-11-18 23:03:25.663706] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.420 [2024-11-18 23:03:25.663750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:06.420 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.420 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 [2024-11-18 
23:03:25.671737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.421 [2024-11-18 23:03:25.673547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.421 [2024-11-18 23:03:25.673582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.421 [2024-11-18 23:03:25.673591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.421 [2024-11-18 23:03:25.673601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.421 "name": "Existed_Raid", 00:08:06.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.421 "strip_size_kb": 64, 00:08:06.421 "state": "configuring", 00:08:06.421 "raid_level": "raid0", 00:08:06.421 "superblock": false, 00:08:06.421 "num_base_bdevs": 3, 00:08:06.421 "num_base_bdevs_discovered": 1, 00:08:06.421 "num_base_bdevs_operational": 3, 00:08:06.421 "base_bdevs_list": [ 00:08:06.421 { 00:08:06.421 "name": "BaseBdev1", 00:08:06.421 "uuid": "a0490fa9-57a2-43e4-85ad-62eed4e66306", 00:08:06.421 "is_configured": true, 00:08:06.421 "data_offset": 0, 00:08:06.421 "data_size": 65536 00:08:06.421 }, 00:08:06.421 { 00:08:06.421 "name": "BaseBdev2", 00:08:06.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.421 "is_configured": false, 00:08:06.421 "data_offset": 0, 00:08:06.421 "data_size": 0 00:08:06.421 }, 00:08:06.421 { 00:08:06.421 "name": "BaseBdev3", 00:08:06.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.421 "is_configured": false, 00:08:06.421 "data_offset": 0, 00:08:06.421 "data_size": 0 00:08:06.421 } 00:08:06.421 ] 00:08:06.421 }' 00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:06.421 23:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.990 [2024-11-18 23:03:26.126043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.990 BaseBdev2 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.990 23:03:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.990 [ 00:08:06.990 { 00:08:06.990 "name": "BaseBdev2", 00:08:06.990 "aliases": [ 00:08:06.990 "2c919afe-0267-40c6-a033-381cdae62a1b" 00:08:06.990 ], 00:08:06.990 "product_name": "Malloc disk", 00:08:06.990 "block_size": 512, 00:08:06.990 "num_blocks": 65536, 00:08:06.990 "uuid": "2c919afe-0267-40c6-a033-381cdae62a1b", 00:08:06.990 "assigned_rate_limits": { 00:08:06.990 "rw_ios_per_sec": 0, 00:08:06.990 "rw_mbytes_per_sec": 0, 00:08:06.990 "r_mbytes_per_sec": 0, 00:08:06.990 "w_mbytes_per_sec": 0 00:08:06.990 }, 00:08:06.990 "claimed": true, 00:08:06.990 "claim_type": "exclusive_write", 00:08:06.990 "zoned": false, 00:08:06.990 "supported_io_types": { 00:08:06.990 "read": true, 00:08:06.990 "write": true, 00:08:06.990 "unmap": true, 00:08:06.990 "flush": true, 00:08:06.990 "reset": true, 00:08:06.990 "nvme_admin": false, 00:08:06.990 "nvme_io": false, 00:08:06.990 "nvme_io_md": false, 00:08:06.990 "write_zeroes": true, 00:08:06.990 "zcopy": true, 00:08:06.990 "get_zone_info": false, 00:08:06.990 "zone_management": false, 00:08:06.990 "zone_append": false, 00:08:06.990 "compare": false, 00:08:06.990 "compare_and_write": false, 00:08:06.990 "abort": true, 00:08:06.990 "seek_hole": false, 00:08:06.990 "seek_data": false, 00:08:06.990 "copy": true, 00:08:06.990 "nvme_iov_md": false 00:08:06.990 }, 00:08:06.990 "memory_domains": [ 00:08:06.990 { 00:08:06.990 "dma_device_id": "system", 00:08:06.990 "dma_device_type": 1 00:08:06.990 }, 00:08:06.990 { 00:08:06.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.990 "dma_device_type": 2 00:08:06.990 } 00:08:06.990 ], 00:08:06.990 "driver_specific": {} 00:08:06.990 } 00:08:06.990 ] 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.990 23:03:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.990 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.990 "name": "Existed_Raid", 00:08:06.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.990 "strip_size_kb": 64, 00:08:06.990 "state": "configuring", 00:08:06.990 "raid_level": "raid0", 00:08:06.990 "superblock": false, 00:08:06.990 "num_base_bdevs": 3, 00:08:06.990 "num_base_bdevs_discovered": 2, 00:08:06.990 "num_base_bdevs_operational": 3, 00:08:06.990 "base_bdevs_list": [ 00:08:06.990 { 00:08:06.990 "name": "BaseBdev1", 00:08:06.990 "uuid": "a0490fa9-57a2-43e4-85ad-62eed4e66306", 00:08:06.990 "is_configured": true, 00:08:06.990 "data_offset": 0, 00:08:06.990 "data_size": 65536 00:08:06.990 }, 00:08:06.990 { 00:08:06.990 "name": "BaseBdev2", 00:08:06.990 "uuid": "2c919afe-0267-40c6-a033-381cdae62a1b", 00:08:06.991 "is_configured": true, 00:08:06.991 "data_offset": 0, 00:08:06.991 "data_size": 65536 00:08:06.991 }, 00:08:06.991 { 00:08:06.991 "name": "BaseBdev3", 00:08:06.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.991 "is_configured": false, 00:08:06.991 "data_offset": 0, 00:08:06.991 "data_size": 0 00:08:06.991 } 00:08:06.991 ] 00:08:06.991 }' 00:08:06.991 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.991 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.250 [2024-11-18 23:03:26.612128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.250 [2024-11-18 23:03:26.612227] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:07.250 [2024-11-18 23:03:26.612257] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:07.250 [2024-11-18 23:03:26.612646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:07.250 BaseBdev3 00:08:07.250 [2024-11-18 23:03:26.612821] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:07.250 [2024-11-18 23:03:26.612837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:07.250 [2024-11-18 23:03:26.613035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.250 
23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.250 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 [ 00:08:07.510 { 00:08:07.510 "name": "BaseBdev3", 00:08:07.510 "aliases": [ 00:08:07.510 "80268159-7f76-4eef-a22c-4f943f5b510d" 00:08:07.510 ], 00:08:07.510 "product_name": "Malloc disk", 00:08:07.510 "block_size": 512, 00:08:07.510 "num_blocks": 65536, 00:08:07.510 "uuid": "80268159-7f76-4eef-a22c-4f943f5b510d", 00:08:07.510 "assigned_rate_limits": { 00:08:07.510 "rw_ios_per_sec": 0, 00:08:07.510 "rw_mbytes_per_sec": 0, 00:08:07.510 "r_mbytes_per_sec": 0, 00:08:07.510 "w_mbytes_per_sec": 0 00:08:07.510 }, 00:08:07.510 "claimed": true, 00:08:07.510 "claim_type": "exclusive_write", 00:08:07.510 "zoned": false, 00:08:07.510 "supported_io_types": { 00:08:07.510 "read": true, 00:08:07.510 "write": true, 00:08:07.510 "unmap": true, 00:08:07.510 "flush": true, 00:08:07.510 "reset": true, 00:08:07.510 "nvme_admin": false, 00:08:07.510 "nvme_io": false, 00:08:07.510 "nvme_io_md": false, 00:08:07.510 "write_zeroes": true, 00:08:07.510 "zcopy": true, 00:08:07.510 "get_zone_info": false, 00:08:07.510 "zone_management": false, 00:08:07.510 "zone_append": false, 00:08:07.510 "compare": false, 00:08:07.510 "compare_and_write": false, 00:08:07.510 "abort": true, 00:08:07.510 "seek_hole": false, 00:08:07.510 "seek_data": false, 00:08:07.510 "copy": true, 00:08:07.510 "nvme_iov_md": false 00:08:07.510 }, 00:08:07.510 "memory_domains": [ 00:08:07.510 { 00:08:07.510 "dma_device_id": "system", 00:08:07.510 "dma_device_type": 1 00:08:07.510 }, 00:08:07.510 { 00:08:07.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.510 "dma_device_type": 2 00:08:07.510 } 00:08:07.510 ], 00:08:07.510 "driver_specific": {} 00:08:07.510 } 00:08:07.510 ] 
00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.510 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.510 "name": "Existed_Raid", 00:08:07.510 "uuid": "c2fad5c1-3adb-4af9-9849-de30a39ced4e", 00:08:07.510 "strip_size_kb": 64, 00:08:07.510 "state": "online", 00:08:07.510 "raid_level": "raid0", 00:08:07.510 "superblock": false, 00:08:07.510 "num_base_bdevs": 3, 00:08:07.510 "num_base_bdevs_discovered": 3, 00:08:07.510 "num_base_bdevs_operational": 3, 00:08:07.510 "base_bdevs_list": [ 00:08:07.510 { 00:08:07.510 "name": "BaseBdev1", 00:08:07.510 "uuid": "a0490fa9-57a2-43e4-85ad-62eed4e66306", 00:08:07.510 "is_configured": true, 00:08:07.510 "data_offset": 0, 00:08:07.510 "data_size": 65536 00:08:07.510 }, 00:08:07.510 { 00:08:07.510 "name": "BaseBdev2", 00:08:07.511 "uuid": "2c919afe-0267-40c6-a033-381cdae62a1b", 00:08:07.511 "is_configured": true, 00:08:07.511 "data_offset": 0, 00:08:07.511 "data_size": 65536 00:08:07.511 }, 00:08:07.511 { 00:08:07.511 "name": "BaseBdev3", 00:08:07.511 "uuid": "80268159-7f76-4eef-a22c-4f943f5b510d", 00:08:07.511 "is_configured": true, 00:08:07.511 "data_offset": 0, 00:08:07.511 "data_size": 65536 00:08:07.511 } 00:08:07.511 ] 00:08:07.511 }' 00:08:07.511 23:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.511 23:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 [2024-11-18 23:03:27.119558] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.770 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.030 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.030 "name": "Existed_Raid", 00:08:08.030 "aliases": [ 00:08:08.030 "c2fad5c1-3adb-4af9-9849-de30a39ced4e" 00:08:08.030 ], 00:08:08.030 "product_name": "Raid Volume", 00:08:08.030 "block_size": 512, 00:08:08.030 "num_blocks": 196608, 00:08:08.030 "uuid": "c2fad5c1-3adb-4af9-9849-de30a39ced4e", 00:08:08.030 "assigned_rate_limits": { 00:08:08.030 "rw_ios_per_sec": 0, 00:08:08.030 "rw_mbytes_per_sec": 0, 00:08:08.030 "r_mbytes_per_sec": 0, 00:08:08.030 "w_mbytes_per_sec": 0 00:08:08.030 }, 00:08:08.030 "claimed": false, 00:08:08.030 "zoned": false, 00:08:08.030 "supported_io_types": { 00:08:08.030 "read": true, 00:08:08.030 "write": true, 00:08:08.030 "unmap": true, 00:08:08.030 "flush": true, 00:08:08.030 "reset": true, 00:08:08.030 "nvme_admin": false, 00:08:08.030 "nvme_io": false, 00:08:08.031 "nvme_io_md": false, 00:08:08.031 "write_zeroes": true, 00:08:08.031 "zcopy": false, 00:08:08.031 "get_zone_info": false, 00:08:08.031 "zone_management": false, 00:08:08.031 
"zone_append": false, 00:08:08.031 "compare": false, 00:08:08.031 "compare_and_write": false, 00:08:08.031 "abort": false, 00:08:08.031 "seek_hole": false, 00:08:08.031 "seek_data": false, 00:08:08.031 "copy": false, 00:08:08.031 "nvme_iov_md": false 00:08:08.031 }, 00:08:08.031 "memory_domains": [ 00:08:08.031 { 00:08:08.031 "dma_device_id": "system", 00:08:08.031 "dma_device_type": 1 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.031 "dma_device_type": 2 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "dma_device_id": "system", 00:08:08.031 "dma_device_type": 1 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.031 "dma_device_type": 2 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "dma_device_id": "system", 00:08:08.031 "dma_device_type": 1 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.031 "dma_device_type": 2 00:08:08.031 } 00:08:08.031 ], 00:08:08.031 "driver_specific": { 00:08:08.031 "raid": { 00:08:08.031 "uuid": "c2fad5c1-3adb-4af9-9849-de30a39ced4e", 00:08:08.031 "strip_size_kb": 64, 00:08:08.031 "state": "online", 00:08:08.031 "raid_level": "raid0", 00:08:08.031 "superblock": false, 00:08:08.031 "num_base_bdevs": 3, 00:08:08.031 "num_base_bdevs_discovered": 3, 00:08:08.031 "num_base_bdevs_operational": 3, 00:08:08.031 "base_bdevs_list": [ 00:08:08.031 { 00:08:08.031 "name": "BaseBdev1", 00:08:08.031 "uuid": "a0490fa9-57a2-43e4-85ad-62eed4e66306", 00:08:08.031 "is_configured": true, 00:08:08.031 "data_offset": 0, 00:08:08.031 "data_size": 65536 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "name": "BaseBdev2", 00:08:08.031 "uuid": "2c919afe-0267-40c6-a033-381cdae62a1b", 00:08:08.031 "is_configured": true, 00:08:08.031 "data_offset": 0, 00:08:08.031 "data_size": 65536 00:08:08.031 }, 00:08:08.031 { 00:08:08.031 "name": "BaseBdev3", 00:08:08.031 "uuid": "80268159-7f76-4eef-a22c-4f943f5b510d", 00:08:08.031 "is_configured": true, 
00:08:08.031 "data_offset": 0, 00:08:08.031 "data_size": 65536 00:08:08.031 } 00:08:08.031 ] 00:08:08.031 } 00:08:08.031 } 00:08:08.031 }' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.031 BaseBdev2 00:08:08.031 BaseBdev3' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.031 23:03:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.031 [2024-11-18 23:03:27.362916] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.031 [2024-11-18 23:03:27.362981] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.031 [2024-11-18 23:03:27.363041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.031 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.290 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.290 "name": "Existed_Raid", 00:08:08.290 "uuid": "c2fad5c1-3adb-4af9-9849-de30a39ced4e", 00:08:08.290 "strip_size_kb": 64, 00:08:08.290 "state": "offline", 00:08:08.290 "raid_level": "raid0", 00:08:08.290 "superblock": false, 00:08:08.290 "num_base_bdevs": 3, 00:08:08.290 "num_base_bdevs_discovered": 2, 00:08:08.290 "num_base_bdevs_operational": 2, 00:08:08.290 "base_bdevs_list": [ 00:08:08.290 { 00:08:08.290 "name": null, 00:08:08.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.290 "is_configured": false, 00:08:08.290 "data_offset": 0, 00:08:08.290 "data_size": 65536 00:08:08.290 }, 00:08:08.290 { 00:08:08.290 "name": "BaseBdev2", 00:08:08.290 "uuid": "2c919afe-0267-40c6-a033-381cdae62a1b", 00:08:08.290 "is_configured": true, 00:08:08.290 "data_offset": 0, 00:08:08.290 "data_size": 65536 00:08:08.290 }, 00:08:08.291 { 00:08:08.291 "name": "BaseBdev3", 00:08:08.291 "uuid": "80268159-7f76-4eef-a22c-4f943f5b510d", 00:08:08.291 "is_configured": true, 00:08:08.291 "data_offset": 0, 00:08:08.291 "data_size": 65536 00:08:08.291 } 00:08:08.291 ] 00:08:08.291 }' 00:08:08.291 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.291 23:03:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.551 [2024-11-18 23:03:27.793434] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.551 23:03:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.551 [2024-11-18 23:03:27.860526] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.551 [2024-11-18 23:03:27.860569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.551 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.812 BaseBdev2 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.812 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.812 [ 00:08:08.812 { 00:08:08.812 "name": "BaseBdev2", 00:08:08.812 "aliases": [ 00:08:08.812 "940ad977-5451-4f72-a6ad-0a1df17bfb04" 00:08:08.812 ], 00:08:08.812 "product_name": "Malloc disk", 00:08:08.812 "block_size": 512, 00:08:08.812 "num_blocks": 65536, 00:08:08.812 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04", 00:08:08.812 "assigned_rate_limits": { 00:08:08.812 "rw_ios_per_sec": 0, 00:08:08.812 "rw_mbytes_per_sec": 0, 00:08:08.812 "r_mbytes_per_sec": 0, 00:08:08.812 "w_mbytes_per_sec": 0 00:08:08.812 }, 00:08:08.812 "claimed": false, 00:08:08.812 "zoned": false, 00:08:08.812 "supported_io_types": { 00:08:08.812 "read": true, 00:08:08.812 "write": true, 00:08:08.812 "unmap": true, 00:08:08.812 "flush": true, 00:08:08.812 "reset": true, 00:08:08.812 "nvme_admin": false, 00:08:08.812 "nvme_io": false, 00:08:08.812 "nvme_io_md": false, 00:08:08.812 "write_zeroes": true, 00:08:08.812 "zcopy": true, 00:08:08.812 "get_zone_info": false, 00:08:08.812 "zone_management": false, 00:08:08.812 "zone_append": false, 00:08:08.812 "compare": false, 00:08:08.812 "compare_and_write": false, 00:08:08.812 "abort": true, 00:08:08.812 "seek_hole": false, 00:08:08.812 "seek_data": false, 00:08:08.812 "copy": true, 00:08:08.812 "nvme_iov_md": false 00:08:08.812 }, 00:08:08.812 "memory_domains": [ 00:08:08.812 { 00:08:08.813 "dma_device_id": "system", 00:08:08.813 "dma_device_type": 1 00:08:08.813 }, 
00:08:08.813 { 00:08:08.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.813 "dma_device_type": 2 00:08:08.813 } 00:08:08.813 ], 00:08:08.813 "driver_specific": {} 00:08:08.813 } 00:08:08.813 ] 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.813 BaseBdev3 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:08.813 23:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.813 [ 00:08:08.813 { 00:08:08.813 "name": "BaseBdev3", 00:08:08.813 "aliases": [ 00:08:08.813 "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f" 00:08:08.813 ], 00:08:08.813 "product_name": "Malloc disk", 00:08:08.813 "block_size": 512, 00:08:08.813 "num_blocks": 65536, 00:08:08.813 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f", 00:08:08.813 "assigned_rate_limits": { 00:08:08.813 "rw_ios_per_sec": 0, 00:08:08.813 "rw_mbytes_per_sec": 0, 00:08:08.813 "r_mbytes_per_sec": 0, 00:08:08.813 "w_mbytes_per_sec": 0 00:08:08.813 }, 00:08:08.813 "claimed": false, 00:08:08.813 "zoned": false, 00:08:08.813 "supported_io_types": { 00:08:08.813 "read": true, 00:08:08.813 "write": true, 00:08:08.813 "unmap": true, 00:08:08.813 "flush": true, 00:08:08.813 "reset": true, 00:08:08.813 "nvme_admin": false, 00:08:08.813 "nvme_io": false, 00:08:08.813 "nvme_io_md": false, 00:08:08.813 "write_zeroes": true, 00:08:08.813 "zcopy": true, 00:08:08.813 "get_zone_info": false, 00:08:08.813 "zone_management": false, 00:08:08.813 "zone_append": false, 00:08:08.813 "compare": false, 00:08:08.813 "compare_and_write": false, 00:08:08.813 "abort": true, 00:08:08.813 "seek_hole": false, 00:08:08.813 "seek_data": false, 00:08:08.813 "copy": true, 00:08:08.813 "nvme_iov_md": false 00:08:08.813 }, 00:08:08.813 "memory_domains": [ 00:08:08.813 { 00:08:08.813 "dma_device_id": "system", 00:08:08.813 "dma_device_type": 1 00:08:08.813 }, 00:08:08.813 { 
00:08:08.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.813 "dma_device_type": 2 00:08:08.813 } 00:08:08.813 ], 00:08:08.813 "driver_specific": {} 00:08:08.813 } 00:08:08.813 ] 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.813 [2024-11-18 23:03:28.035141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.813 [2024-11-18 23:03:28.035232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.813 [2024-11-18 23:03:28.035291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.813 [2024-11-18 23:03:28.037136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.813 "name": "Existed_Raid", 00:08:08.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.813 "strip_size_kb": 64, 00:08:08.813 "state": "configuring", 00:08:08.813 "raid_level": "raid0", 00:08:08.813 "superblock": false, 00:08:08.813 "num_base_bdevs": 3, 00:08:08.813 "num_base_bdevs_discovered": 2, 00:08:08.813 "num_base_bdevs_operational": 3, 00:08:08.813 "base_bdevs_list": [ 00:08:08.813 { 00:08:08.813 "name": "BaseBdev1", 00:08:08.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.813 
"is_configured": false, 00:08:08.813 "data_offset": 0, 00:08:08.813 "data_size": 0 00:08:08.813 }, 00:08:08.813 { 00:08:08.813 "name": "BaseBdev2", 00:08:08.813 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04", 00:08:08.813 "is_configured": true, 00:08:08.813 "data_offset": 0, 00:08:08.813 "data_size": 65536 00:08:08.813 }, 00:08:08.813 { 00:08:08.813 "name": "BaseBdev3", 00:08:08.813 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f", 00:08:08.813 "is_configured": true, 00:08:08.813 "data_offset": 0, 00:08:08.813 "data_size": 65536 00:08:08.813 } 00:08:08.813 ] 00:08:08.813 }' 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.813 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.073 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:09.073 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.073 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 [2024-11-18 23:03:28.450421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.333 23:03:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.333 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.334 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.334 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.334 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.334 "name": "Existed_Raid", 00:08:09.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.334 "strip_size_kb": 64, 00:08:09.334 "state": "configuring", 00:08:09.334 "raid_level": "raid0", 00:08:09.334 "superblock": false, 00:08:09.334 "num_base_bdevs": 3, 00:08:09.334 "num_base_bdevs_discovered": 1, 00:08:09.334 "num_base_bdevs_operational": 3, 00:08:09.334 "base_bdevs_list": [ 00:08:09.334 { 00:08:09.334 "name": "BaseBdev1", 00:08:09.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.334 "is_configured": false, 00:08:09.334 "data_offset": 0, 00:08:09.334 "data_size": 0 00:08:09.334 }, 00:08:09.334 { 00:08:09.334 "name": null, 00:08:09.334 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04", 00:08:09.334 "is_configured": false, 00:08:09.334 "data_offset": 0, 
00:08:09.334 "data_size": 65536 00:08:09.334 }, 00:08:09.334 { 00:08:09.334 "name": "BaseBdev3", 00:08:09.334 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f", 00:08:09.334 "is_configured": true, 00:08:09.334 "data_offset": 0, 00:08:09.334 "data_size": 65536 00:08:09.334 } 00:08:09.334 ] 00:08:09.334 }' 00:08:09.334 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.334 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.599 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:09.599 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.599 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.599 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.599 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.600 [2024-11-18 23:03:28.936502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.600 BaseBdev1 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.600 [
00:08:09.600 {
00:08:09.600 "name": "BaseBdev1",
00:08:09.600 "aliases": [
00:08:09.600 "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57"
00:08:09.600 ],
00:08:09.600 "product_name": "Malloc disk",
00:08:09.600 "block_size": 512,
00:08:09.600 "num_blocks": 65536,
00:08:09.600 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:09.600 "assigned_rate_limits": {
00:08:09.600 "rw_ios_per_sec": 0,
00:08:09.600 "rw_mbytes_per_sec": 0,
00:08:09.600 "r_mbytes_per_sec": 0,
00:08:09.600 "w_mbytes_per_sec": 0
00:08:09.600 },
00:08:09.600 "claimed": true,
00:08:09.600 "claim_type": "exclusive_write",
00:08:09.600 "zoned": false,
00:08:09.600 "supported_io_types": {
00:08:09.600 "read": true,
00:08:09.600 "write": true,
00:08:09.600 "unmap": true,
00:08:09.600 "flush": true,
00:08:09.600 "reset": true,
00:08:09.600 "nvme_admin": false,
00:08:09.600 "nvme_io": false,
00:08:09.600 "nvme_io_md": false,
00:08:09.600 "write_zeroes": true,
00:08:09.600 "zcopy": true,
00:08:09.600 "get_zone_info": false,
00:08:09.600 "zone_management": false,
00:08:09.600 "zone_append": false,
00:08:09.600 "compare": false,
00:08:09.600 "compare_and_write": false,
00:08:09.600 "abort": true,
00:08:09.600 "seek_hole": false,
00:08:09.600 "seek_data": false,
00:08:09.600 "copy": true,
00:08:09.600 "nvme_iov_md": false
00:08:09.600 },
00:08:09.600 "memory_domains": [
00:08:09.600 {
00:08:09.600 "dma_device_id": "system",
00:08:09.600 "dma_device_type": 1
00:08:09.600 },
00:08:09.600 {
00:08:09.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:09.600 "dma_device_type": 2
00:08:09.600 }
00:08:09.600 ],
00:08:09.600 "driver_specific": {}
00:08:09.600 }
00:08:09.600 ]
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:09.600 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:09.870 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:09.870 23:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:09.870 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.870 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.870 23:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.870 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:09.870 "name": "Existed_Raid",
00:08:09.870 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:09.870 "strip_size_kb": 64,
00:08:09.870 "state": "configuring",
00:08:09.870 "raid_level": "raid0",
00:08:09.870 "superblock": false,
00:08:09.870 "num_base_bdevs": 3,
00:08:09.870 "num_base_bdevs_discovered": 2,
00:08:09.870 "num_base_bdevs_operational": 3,
00:08:09.870 "base_bdevs_list": [
00:08:09.870 {
00:08:09.870 "name": "BaseBdev1",
00:08:09.870 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:09.870 "is_configured": true,
00:08:09.870 "data_offset": 0,
00:08:09.870 "data_size": 65536
00:08:09.870 },
00:08:09.870 {
00:08:09.870 "name": null,
00:08:09.870 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:09.870 "is_configured": false,
00:08:09.870 "data_offset": 0,
00:08:09.870 "data_size": 65536
00:08:09.870 },
00:08:09.870 {
00:08:09.870 "name": "BaseBdev3",
00:08:09.870 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:09.870 "is_configured": true,
00:08:09.870 "data_offset": 0,
00:08:09.870 "data_size": 65536
00:08:09.870 }
00:08:09.870 ]
00:08:09.870 }'
00:08:09.870 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:09.870 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.142 [2024-11-18 23:03:29.459650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:10.142 "name": "Existed_Raid",
00:08:10.142 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.142 "strip_size_kb": 64,
00:08:10.142 "state": "configuring",
00:08:10.142 "raid_level": "raid0",
00:08:10.142 "superblock": false,
00:08:10.142 "num_base_bdevs": 3,
00:08:10.142 "num_base_bdevs_discovered": 1,
00:08:10.142 "num_base_bdevs_operational": 3,
00:08:10.142 "base_bdevs_list": [
00:08:10.142 {
00:08:10.142 "name": "BaseBdev1",
00:08:10.142 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:10.142 "is_configured": true,
00:08:10.142 "data_offset": 0,
00:08:10.142 "data_size": 65536
00:08:10.142 },
00:08:10.142 {
00:08:10.142 "name": null,
00:08:10.142 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:10.142 "is_configured": false,
00:08:10.142 "data_offset": 0,
00:08:10.142 "data_size": 65536
00:08:10.142 },
00:08:10.142 {
00:08:10.142 "name": null,
00:08:10.142 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:10.142 "is_configured": false,
00:08:10.142 "data_offset": 0,
00:08:10.142 "data_size": 65536
00:08:10.142 }
00:08:10.142 ]
00:08:10.142 }'
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:10.142 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.710 [2024-11-18 23:03:29.994909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:10.710 23:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:10.711 "name": "Existed_Raid",
00:08:10.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.711 "strip_size_kb": 64,
00:08:10.711 "state": "configuring",
00:08:10.711 "raid_level": "raid0",
00:08:10.711 "superblock": false,
00:08:10.711 "num_base_bdevs": 3,
00:08:10.711 "num_base_bdevs_discovered": 2,
00:08:10.711 "num_base_bdevs_operational": 3,
00:08:10.711 "base_bdevs_list": [
00:08:10.711 {
00:08:10.711 "name": "BaseBdev1",
00:08:10.711 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:10.711 "is_configured": true,
00:08:10.711 "data_offset": 0,
00:08:10.711 "data_size": 65536
00:08:10.711 },
00:08:10.711 {
00:08:10.711 "name": null,
00:08:10.711 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:10.711 "is_configured": false,
00:08:10.711 "data_offset": 0,
00:08:10.711 "data_size": 65536
00:08:10.711 },
00:08:10.711 {
00:08:10.711 "name": "BaseBdev3",
00:08:10.711 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:10.711 "is_configured": true,
00:08:10.711 "data_offset": 0,
00:08:10.711 "data_size": 65536
00:08:10.711 }
00:08:10.711 ]
00:08:10.711 }'
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:10.711 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.279 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:11.279 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.280 [2024-11-18 23:03:30.498051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.280 "name": "Existed_Raid",
00:08:11.280 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.280 "strip_size_kb": 64,
00:08:11.280 "state": "configuring",
00:08:11.280 "raid_level": "raid0",
00:08:11.280 "superblock": false,
00:08:11.280 "num_base_bdevs": 3,
00:08:11.280 "num_base_bdevs_discovered": 1,
00:08:11.280 "num_base_bdevs_operational": 3,
00:08:11.280 "base_bdevs_list": [
00:08:11.280 {
00:08:11.280 "name": null,
00:08:11.280 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:11.280 "is_configured": false,
00:08:11.280 "data_offset": 0,
00:08:11.280 "data_size": 65536
00:08:11.280 },
00:08:11.280 {
00:08:11.280 "name": null,
00:08:11.280 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:11.280 "is_configured": false,
00:08:11.280 "data_offset": 0,
00:08:11.280 "data_size": 65536
00:08:11.280 },
00:08:11.280 {
00:08:11.280 "name": "BaseBdev3",
00:08:11.280 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:11.280 "is_configured": true,
00:08:11.280 "data_offset": 0,
00:08:11.280 "data_size": 65536
00:08:11.280 }
00:08:11.280 ]
00:08:11.280 }'
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.280 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.539 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.539 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:11.540 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.540 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.799 [2024-11-18 23:03:30.955688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.799 23:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.799 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.799 "name": "Existed_Raid",
00:08:11.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.799 "strip_size_kb": 64,
00:08:11.799 "state": "configuring",
00:08:11.799 "raid_level": "raid0",
00:08:11.799 "superblock": false,
00:08:11.799 "num_base_bdevs": 3,
00:08:11.799 "num_base_bdevs_discovered": 2,
00:08:11.799 "num_base_bdevs_operational": 3,
00:08:11.799 "base_bdevs_list": [
00:08:11.799 {
00:08:11.799 "name": null,
00:08:11.799 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:11.799 "is_configured": false,
00:08:11.799 "data_offset": 0,
00:08:11.799 "data_size": 65536
00:08:11.799 },
00:08:11.799 {
00:08:11.799 "name": "BaseBdev2",
00:08:11.799 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:11.799 "is_configured": true,
00:08:11.799 "data_offset": 0,
00:08:11.799 "data_size": 65536
00:08:11.799 },
00:08:11.799 {
00:08:11.799 "name": "BaseBdev3",
00:08:11.799 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:11.799 "is_configured": true,
00:08:11.799 "data_offset": 0,
00:08:11.799 "data_size": 65536
00:08:11.799 }
00:08:11.799 ]
00:08:11.799 }'
00:08:11.799 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.799 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.059 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.059 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:12.059 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.059 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.059 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.059 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 08ed7d3d-2be7-4008-9b7c-23cb9ae5db57
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.320 NewBaseBdev
00:08:12.320 [2024-11-18 23:03:31.497756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:08:12.320 [2024-11-18 23:03:31.497796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:08:12.320 [2024-11-18 23:03:31.497805] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:12.320 [2024-11-18 23:03:31.498061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:08:12.320 [2024-11-18 23:03:31.498176] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:08:12.320 [2024-11-18 23:03:31.498185] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:08:12.320 [2024-11-18 23:03:31.498380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:12.320 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.321 [
00:08:12.321 {
00:08:12.321 "name": "NewBaseBdev",
00:08:12.321 "aliases": [
00:08:12.321 "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57"
00:08:12.321 ],
00:08:12.321 "product_name": "Malloc disk",
00:08:12.321 "block_size": 512,
00:08:12.321 "num_blocks": 65536,
00:08:12.321 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:12.321 "assigned_rate_limits": {
00:08:12.321 "rw_ios_per_sec": 0,
00:08:12.321 "rw_mbytes_per_sec": 0,
00:08:12.321 "r_mbytes_per_sec": 0,
00:08:12.321 "w_mbytes_per_sec": 0
00:08:12.321 },
00:08:12.321 "claimed": true,
00:08:12.321 "claim_type": "exclusive_write",
00:08:12.321 "zoned": false,
00:08:12.321 "supported_io_types": {
00:08:12.321 "read": true,
00:08:12.321 "write": true,
00:08:12.321 "unmap": true,
00:08:12.321 "flush": true,
00:08:12.321 "reset": true,
00:08:12.321 "nvme_admin": false,
00:08:12.321 "nvme_io": false,
00:08:12.321 "nvme_io_md": false,
00:08:12.321 "write_zeroes": true,
00:08:12.321 "zcopy": true,
00:08:12.321 "get_zone_info": false,
00:08:12.321 "zone_management": false,
00:08:12.321 "zone_append": false,
00:08:12.321 "compare": false,
00:08:12.321 "compare_and_write": false,
00:08:12.321 "abort": true,
00:08:12.321 "seek_hole": false,
00:08:12.321 "seek_data": false,
00:08:12.321 "copy": true,
00:08:12.321 "nvme_iov_md": false
00:08:12.321 },
00:08:12.321 "memory_domains": [
00:08:12.321 {
00:08:12.321 "dma_device_id": "system",
00:08:12.321 "dma_device_type": 1
00:08:12.321 },
00:08:12.321 {
00:08:12.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.321 "dma_device_type": 2
00:08:12.321 }
00:08:12.321 ],
00:08:12.321 "driver_specific": {}
00:08:12.321 }
00:08:12.321 ]
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.321 "name": "Existed_Raid",
00:08:12.321 "uuid": "04bbf128-26fc-475b-9d57-7764de23c35e",
00:08:12.321 "strip_size_kb": 64,
00:08:12.321 "state": "online",
00:08:12.321 "raid_level": "raid0",
00:08:12.321 "superblock": false,
00:08:12.321 "num_base_bdevs": 3,
00:08:12.321 "num_base_bdevs_discovered": 3,
00:08:12.321 "num_base_bdevs_operational": 3,
00:08:12.321 "base_bdevs_list": [
00:08:12.321 {
00:08:12.321 "name": "NewBaseBdev",
00:08:12.321 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:12.321 "is_configured": true,
00:08:12.321 "data_offset": 0,
00:08:12.321 "data_size": 65536
00:08:12.321 },
00:08:12.321 {
00:08:12.321 "name": "BaseBdev2",
00:08:12.321 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:12.321 "is_configured": true,
00:08:12.321 "data_offset": 0,
00:08:12.321 "data_size": 65536
00:08:12.321 },
00:08:12.321 {
00:08:12.321 "name": "BaseBdev3",
00:08:12.321 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:12.321 "is_configured": true,
00:08:12.321 "data_offset": 0,
00:08:12.321 "data_size": 65536
00:08:12.321 }
00:08:12.321 ]
00:08:12.321 }'
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.321 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.904 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:08:12.904 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.905 [2024-11-18 23:03:31.973220] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:12.905 23:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:12.905 "name": "Existed_Raid",
00:08:12.905 "aliases": [
00:08:12.905 "04bbf128-26fc-475b-9d57-7764de23c35e"
00:08:12.905 ],
00:08:12.905 "product_name": "Raid Volume",
00:08:12.905 "block_size": 512,
00:08:12.905 "num_blocks": 196608,
00:08:12.905 "uuid": "04bbf128-26fc-475b-9d57-7764de23c35e",
00:08:12.905 "assigned_rate_limits": {
00:08:12.905 "rw_ios_per_sec": 0,
00:08:12.905 "rw_mbytes_per_sec": 0,
00:08:12.905 "r_mbytes_per_sec": 0,
00:08:12.905 "w_mbytes_per_sec": 0
00:08:12.905 },
00:08:12.905 "claimed": false,
00:08:12.905 "zoned": false,
00:08:12.905 "supported_io_types": {
00:08:12.905 "read": true,
00:08:12.905 "write": true,
00:08:12.905 "unmap": true,
00:08:12.905 "flush": true,
00:08:12.905 "reset": true,
00:08:12.905 "nvme_admin": false,
00:08:12.905 "nvme_io": false,
00:08:12.905 "nvme_io_md": false,
00:08:12.905 "write_zeroes": true,
00:08:12.905 "zcopy": false,
00:08:12.905 "get_zone_info": false,
00:08:12.905 "zone_management": false,
00:08:12.905 "zone_append": false,
00:08:12.905 "compare": false,
00:08:12.905 "compare_and_write": false,
00:08:12.905 "abort": false,
00:08:12.905 "seek_hole": false,
00:08:12.905 "seek_data": false,
00:08:12.905 "copy": false,
00:08:12.905 "nvme_iov_md": false
00:08:12.905 },
00:08:12.905 "memory_domains": [
00:08:12.905 {
00:08:12.905 "dma_device_id": "system",
00:08:12.905 "dma_device_type": 1
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.905 "dma_device_type": 2
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "dma_device_id": "system",
00:08:12.905 "dma_device_type": 1
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.905 "dma_device_type": 2
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "dma_device_id": "system",
00:08:12.905 "dma_device_type": 1
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.905 "dma_device_type": 2
00:08:12.905 }
00:08:12.905 ],
00:08:12.905 "driver_specific": {
00:08:12.905 "raid": {
00:08:12.905 "uuid": "04bbf128-26fc-475b-9d57-7764de23c35e",
00:08:12.905 "strip_size_kb": 64,
00:08:12.905 "state": "online",
00:08:12.905 "raid_level": "raid0",
00:08:12.905 "superblock": false,
00:08:12.905 "num_base_bdevs": 3,
00:08:12.905 "num_base_bdevs_discovered": 3,
00:08:12.905 "num_base_bdevs_operational": 3,
00:08:12.905 "base_bdevs_list": [
00:08:12.905 {
00:08:12.905 "name": "NewBaseBdev",
00:08:12.905 "uuid": "08ed7d3d-2be7-4008-9b7c-23cb9ae5db57",
00:08:12.905 "is_configured": true,
00:08:12.905 "data_offset": 0,
00:08:12.905 "data_size": 65536
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "name": "BaseBdev2",
00:08:12.905 "uuid": "940ad977-5451-4f72-a6ad-0a1df17bfb04",
00:08:12.905 "is_configured": true,
00:08:12.905 "data_offset": 0,
00:08:12.905 "data_size": 65536
00:08:12.905 },
00:08:12.905 {
00:08:12.905 "name": "BaseBdev3",
00:08:12.905 "uuid": "949a8ec5-f5fc-4df5-97a6-aa63f4d61c5f",
00:08:12.905 "is_configured": true,
00:08:12.905 "data_offset": 0,
00:08:12.905 "data_size": 65536
00:08:12.905 }
00:08:12.905 ]
00:08:12.905 }
00:08:12.905 }
00:08:12.905 }'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:12.905 BaseBdev2
00:08:12.905 BaseBdev3'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.905 [2024-11-18 23:03:32.256469] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:12.905 [2024-11-18 23:03:32.256493] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:12.905 [2024-11-18 23:03:32.256560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:12.905 [2024-11-18 23:03:32.256611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:12.905 [2024-11-18 23:03:32.256628] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75016 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75016 ']' 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75016 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.905 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75016 00:08:13.165 killing process with pid 75016 00:08:13.165 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.165 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.165 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75016' 00:08:13.165 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75016 00:08:13.165 [2024-11-18 23:03:32.303021] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.165 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75016 00:08:13.165 [2024-11-18 23:03:32.334378] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.426 ************************************ 00:08:13.426 END TEST raid_state_function_test 00:08:13.426 ************************************ 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.426 00:08:13.426 real 0m8.776s 
00:08:13.426 user 0m15.010s 00:08:13.426 sys 0m1.727s 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.426 23:03:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:13.426 23:03:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:13.426 23:03:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.426 23:03:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.426 ************************************ 00:08:13.426 START TEST raid_state_function_test_sb 00:08:13.426 ************************************ 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75621 00:08:13.426 Process raid pid: 75621 
00:08:13.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75621' 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75621 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75621 ']' 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.426 23:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.427 [2024-11-18 23:03:32.739982] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:13.427 [2024-11-18 23:03:32.740104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.685 [2024-11-18 23:03:32.897873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.685 [2024-11-18 23:03:32.941711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.685 [2024-11-18 23:03:32.983911] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.685 [2024-11-18 23:03:32.983964] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.254 [2024-11-18 23:03:33.557358] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.254 [2024-11-18 23:03:33.557410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.254 [2024-11-18 23:03:33.557423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.254 [2024-11-18 23:03:33.557432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.254 [2024-11-18 23:03:33.557438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:14.254 [2024-11-18 23:03:33.557451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.254 "name": "Existed_Raid", 00:08:14.254 "uuid": "e9168942-3a50-42ac-9ff5-be8ff2182437", 00:08:14.254 "strip_size_kb": 64, 00:08:14.254 "state": "configuring", 00:08:14.254 "raid_level": "raid0", 00:08:14.254 "superblock": true, 00:08:14.254 "num_base_bdevs": 3, 00:08:14.254 "num_base_bdevs_discovered": 0, 00:08:14.254 "num_base_bdevs_operational": 3, 00:08:14.254 "base_bdevs_list": [ 00:08:14.254 { 00:08:14.254 "name": "BaseBdev1", 00:08:14.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.254 "is_configured": false, 00:08:14.254 "data_offset": 0, 00:08:14.254 "data_size": 0 00:08:14.254 }, 00:08:14.254 { 00:08:14.254 "name": "BaseBdev2", 00:08:14.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.254 "is_configured": false, 00:08:14.254 "data_offset": 0, 00:08:14.254 "data_size": 0 00:08:14.254 }, 00:08:14.254 { 00:08:14.254 "name": "BaseBdev3", 00:08:14.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.254 "is_configured": false, 00:08:14.254 "data_offset": 0, 00:08:14.254 "data_size": 0 00:08:14.254 } 00:08:14.254 ] 00:08:14.254 }' 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.254 23:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.824 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.824 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.824 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.824 [2024-11-18 23:03:34.008460] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.824 [2024-11-18 23:03:34.008551] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:14.824 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.824 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.824 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.825 [2024-11-18 23:03:34.020492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.825 [2024-11-18 23:03:34.020567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.825 [2024-11-18 23:03:34.020593] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.825 [2024-11-18 23:03:34.020615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.825 [2024-11-18 23:03:34.020633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.825 [2024-11-18 23:03:34.020653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.825 [2024-11-18 23:03:34.041125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.825 BaseBdev1 
00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.825 [ 00:08:14.825 { 00:08:14.825 "name": "BaseBdev1", 00:08:14.825 "aliases": [ 00:08:14.825 "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599" 00:08:14.825 ], 00:08:14.825 "product_name": "Malloc disk", 00:08:14.825 "block_size": 512, 00:08:14.825 "num_blocks": 65536, 00:08:14.825 "uuid": "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599", 00:08:14.825 "assigned_rate_limits": { 00:08:14.825 
"rw_ios_per_sec": 0, 00:08:14.825 "rw_mbytes_per_sec": 0, 00:08:14.825 "r_mbytes_per_sec": 0, 00:08:14.825 "w_mbytes_per_sec": 0 00:08:14.825 }, 00:08:14.825 "claimed": true, 00:08:14.825 "claim_type": "exclusive_write", 00:08:14.825 "zoned": false, 00:08:14.825 "supported_io_types": { 00:08:14.825 "read": true, 00:08:14.825 "write": true, 00:08:14.825 "unmap": true, 00:08:14.825 "flush": true, 00:08:14.825 "reset": true, 00:08:14.825 "nvme_admin": false, 00:08:14.825 "nvme_io": false, 00:08:14.825 "nvme_io_md": false, 00:08:14.825 "write_zeroes": true, 00:08:14.825 "zcopy": true, 00:08:14.825 "get_zone_info": false, 00:08:14.825 "zone_management": false, 00:08:14.825 "zone_append": false, 00:08:14.825 "compare": false, 00:08:14.825 "compare_and_write": false, 00:08:14.825 "abort": true, 00:08:14.825 "seek_hole": false, 00:08:14.825 "seek_data": false, 00:08:14.825 "copy": true, 00:08:14.825 "nvme_iov_md": false 00:08:14.825 }, 00:08:14.825 "memory_domains": [ 00:08:14.825 { 00:08:14.825 "dma_device_id": "system", 00:08:14.825 "dma_device_type": 1 00:08:14.825 }, 00:08:14.825 { 00:08:14.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.825 "dma_device_type": 2 00:08:14.825 } 00:08:14.825 ], 00:08:14.825 "driver_specific": {} 00:08:14.825 } 00:08:14.825 ] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.825 "name": "Existed_Raid", 00:08:14.825 "uuid": "1d35c8df-2ebc-448b-8bb8-287b24ceec4d", 00:08:14.825 "strip_size_kb": 64, 00:08:14.825 "state": "configuring", 00:08:14.825 "raid_level": "raid0", 00:08:14.825 "superblock": true, 00:08:14.825 "num_base_bdevs": 3, 00:08:14.825 "num_base_bdevs_discovered": 1, 00:08:14.825 "num_base_bdevs_operational": 3, 00:08:14.825 "base_bdevs_list": [ 00:08:14.825 { 00:08:14.825 "name": "BaseBdev1", 00:08:14.825 "uuid": "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599", 00:08:14.825 "is_configured": true, 00:08:14.825 "data_offset": 2048, 00:08:14.825 "data_size": 63488 
00:08:14.825 }, 00:08:14.825 { 00:08:14.825 "name": "BaseBdev2", 00:08:14.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.825 "is_configured": false, 00:08:14.825 "data_offset": 0, 00:08:14.825 "data_size": 0 00:08:14.825 }, 00:08:14.825 { 00:08:14.825 "name": "BaseBdev3", 00:08:14.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.825 "is_configured": false, 00:08:14.825 "data_offset": 0, 00:08:14.825 "data_size": 0 00:08:14.825 } 00:08:14.825 ] 00:08:14.825 }' 00:08:14.825 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.826 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.394 [2024-11-18 23:03:34.508376] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.394 [2024-11-18 23:03:34.508417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.394 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.394 [2024-11-18 23:03:34.520385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.394 [2024-11-18 
23:03:34.522240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.395 [2024-11-18 23:03:34.522316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.395 [2024-11-18 23:03:34.522361] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.395 [2024-11-18 23:03:34.522385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.395 "name": "Existed_Raid", 00:08:15.395 "uuid": "29e3d9b4-1e95-4646-a695-ee9a8bb0c593", 00:08:15.395 "strip_size_kb": 64, 00:08:15.395 "state": "configuring", 00:08:15.395 "raid_level": "raid0", 00:08:15.395 "superblock": true, 00:08:15.395 "num_base_bdevs": 3, 00:08:15.395 "num_base_bdevs_discovered": 1, 00:08:15.395 "num_base_bdevs_operational": 3, 00:08:15.395 "base_bdevs_list": [ 00:08:15.395 { 00:08:15.395 "name": "BaseBdev1", 00:08:15.395 "uuid": "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599", 00:08:15.395 "is_configured": true, 00:08:15.395 "data_offset": 2048, 00:08:15.395 "data_size": 63488 00:08:15.395 }, 00:08:15.395 { 00:08:15.395 "name": "BaseBdev2", 00:08:15.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.395 "is_configured": false, 00:08:15.395 "data_offset": 0, 00:08:15.395 "data_size": 0 00:08:15.395 }, 00:08:15.395 { 00:08:15.395 "name": "BaseBdev3", 00:08:15.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.395 "is_configured": false, 00:08:15.395 "data_offset": 0, 00:08:15.395 "data_size": 0 00:08:15.395 } 00:08:15.395 ] 00:08:15.395 }' 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.395 23:03:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.655 [2024-11-18 23:03:34.952704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.655 BaseBdev2 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.655 [ 00:08:15.655 { 00:08:15.655 "name": "BaseBdev2", 00:08:15.655 "aliases": [ 00:08:15.655 "810ede07-01a7-45ad-9e53-9bc2cf5a4854" 00:08:15.655 ], 00:08:15.655 "product_name": "Malloc disk", 00:08:15.655 "block_size": 512, 00:08:15.655 "num_blocks": 65536, 00:08:15.655 "uuid": "810ede07-01a7-45ad-9e53-9bc2cf5a4854", 00:08:15.655 "assigned_rate_limits": { 00:08:15.655 "rw_ios_per_sec": 0, 00:08:15.655 "rw_mbytes_per_sec": 0, 00:08:15.655 "r_mbytes_per_sec": 0, 00:08:15.655 "w_mbytes_per_sec": 0 00:08:15.655 }, 00:08:15.655 "claimed": true, 00:08:15.655 "claim_type": "exclusive_write", 00:08:15.655 "zoned": false, 00:08:15.655 "supported_io_types": { 00:08:15.655 "read": true, 00:08:15.655 "write": true, 00:08:15.655 "unmap": true, 00:08:15.655 "flush": true, 00:08:15.655 "reset": true, 00:08:15.655 "nvme_admin": false, 00:08:15.655 "nvme_io": false, 00:08:15.655 "nvme_io_md": false, 00:08:15.655 "write_zeroes": true, 00:08:15.655 "zcopy": true, 00:08:15.655 "get_zone_info": false, 00:08:15.655 "zone_management": false, 00:08:15.655 "zone_append": false, 00:08:15.655 "compare": false, 00:08:15.655 "compare_and_write": false, 00:08:15.655 "abort": true, 00:08:15.655 "seek_hole": false, 00:08:15.655 "seek_data": false, 00:08:15.655 "copy": true, 00:08:15.655 "nvme_iov_md": false 00:08:15.655 }, 00:08:15.655 "memory_domains": [ 00:08:15.655 { 00:08:15.655 "dma_device_id": "system", 00:08:15.655 "dma_device_type": 1 00:08:15.655 }, 00:08:15.655 { 00:08:15.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.655 "dma_device_type": 2 00:08:15.655 } 00:08:15.655 ], 00:08:15.655 "driver_specific": {} 00:08:15.655 } 00:08:15.655 ] 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.655 23:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.655 23:03:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.915 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.915 "name": "Existed_Raid", 00:08:15.915 "uuid": "29e3d9b4-1e95-4646-a695-ee9a8bb0c593", 00:08:15.915 "strip_size_kb": 64, 00:08:15.915 "state": "configuring", 00:08:15.915 "raid_level": "raid0", 00:08:15.915 "superblock": true, 00:08:15.915 "num_base_bdevs": 3, 00:08:15.915 "num_base_bdevs_discovered": 2, 00:08:15.915 "num_base_bdevs_operational": 3, 00:08:15.915 "base_bdevs_list": [ 00:08:15.915 { 00:08:15.915 "name": "BaseBdev1", 00:08:15.915 "uuid": "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599", 00:08:15.915 "is_configured": true, 00:08:15.915 "data_offset": 2048, 00:08:15.915 "data_size": 63488 00:08:15.915 }, 00:08:15.915 { 00:08:15.915 "name": "BaseBdev2", 00:08:15.915 "uuid": "810ede07-01a7-45ad-9e53-9bc2cf5a4854", 00:08:15.915 "is_configured": true, 00:08:15.915 "data_offset": 2048, 00:08:15.915 "data_size": 63488 00:08:15.915 }, 00:08:15.915 { 00:08:15.915 "name": "BaseBdev3", 00:08:15.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.915 "is_configured": false, 00:08:15.915 "data_offset": 0, 00:08:15.915 "data_size": 0 00:08:15.915 } 00:08:15.915 ] 00:08:15.915 }' 00:08:15.915 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.915 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.175 BaseBdev3 00:08:16.175 [2024-11-18 23:03:35.446858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.175 [2024-11-18 
23:03:35.447038] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:16.175 [2024-11-18 23:03:35.447054] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.175 [2024-11-18 23:03:35.447372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:16.175 [2024-11-18 23:03:35.447495] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:16.175 [2024-11-18 23:03:35.447506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:16.175 [2024-11-18 23:03:35.447617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.175 [ 00:08:16.175 { 00:08:16.175 "name": "BaseBdev3", 00:08:16.175 "aliases": [ 00:08:16.175 "1704d7b8-b51f-4f28-b7ad-e9a3f7108aed" 00:08:16.175 ], 00:08:16.175 "product_name": "Malloc disk", 00:08:16.175 "block_size": 512, 00:08:16.175 "num_blocks": 65536, 00:08:16.175 "uuid": "1704d7b8-b51f-4f28-b7ad-e9a3f7108aed", 00:08:16.175 "assigned_rate_limits": { 00:08:16.175 "rw_ios_per_sec": 0, 00:08:16.175 "rw_mbytes_per_sec": 0, 00:08:16.175 "r_mbytes_per_sec": 0, 00:08:16.175 "w_mbytes_per_sec": 0 00:08:16.175 }, 00:08:16.175 "claimed": true, 00:08:16.175 "claim_type": "exclusive_write", 00:08:16.175 "zoned": false, 00:08:16.175 "supported_io_types": { 00:08:16.175 "read": true, 00:08:16.175 "write": true, 00:08:16.175 "unmap": true, 00:08:16.175 "flush": true, 00:08:16.175 "reset": true, 00:08:16.175 "nvme_admin": false, 00:08:16.175 "nvme_io": false, 00:08:16.175 "nvme_io_md": false, 00:08:16.175 "write_zeroes": true, 00:08:16.175 "zcopy": true, 00:08:16.175 "get_zone_info": false, 00:08:16.175 "zone_management": false, 00:08:16.175 "zone_append": false, 00:08:16.175 "compare": false, 00:08:16.175 "compare_and_write": false, 00:08:16.175 "abort": true, 00:08:16.175 "seek_hole": false, 00:08:16.175 "seek_data": false, 00:08:16.175 "copy": true, 00:08:16.175 "nvme_iov_md": false 00:08:16.175 }, 00:08:16.175 "memory_domains": [ 00:08:16.175 { 00:08:16.175 "dma_device_id": "system", 00:08:16.175 "dma_device_type": 1 00:08:16.175 }, 00:08:16.175 { 00:08:16.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.175 "dma_device_type": 2 00:08:16.175 } 00:08:16.175 ], 00:08:16.175 "driver_specific": {} 
00:08:16.175 } 00:08:16.175 ] 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.175 "name": "Existed_Raid", 00:08:16.175 "uuid": "29e3d9b4-1e95-4646-a695-ee9a8bb0c593", 00:08:16.175 "strip_size_kb": 64, 00:08:16.175 "state": "online", 00:08:16.175 "raid_level": "raid0", 00:08:16.175 "superblock": true, 00:08:16.175 "num_base_bdevs": 3, 00:08:16.175 "num_base_bdevs_discovered": 3, 00:08:16.175 "num_base_bdevs_operational": 3, 00:08:16.175 "base_bdevs_list": [ 00:08:16.175 { 00:08:16.175 "name": "BaseBdev1", 00:08:16.175 "uuid": "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599", 00:08:16.175 "is_configured": true, 00:08:16.175 "data_offset": 2048, 00:08:16.175 "data_size": 63488 00:08:16.175 }, 00:08:16.175 { 00:08:16.175 "name": "BaseBdev2", 00:08:16.175 "uuid": "810ede07-01a7-45ad-9e53-9bc2cf5a4854", 00:08:16.175 "is_configured": true, 00:08:16.175 "data_offset": 2048, 00:08:16.175 "data_size": 63488 00:08:16.175 }, 00:08:16.175 { 00:08:16.175 "name": "BaseBdev3", 00:08:16.175 "uuid": "1704d7b8-b51f-4f28-b7ad-e9a3f7108aed", 00:08:16.175 "is_configured": true, 00:08:16.175 "data_offset": 2048, 00:08:16.175 "data_size": 63488 00:08:16.175 } 00:08:16.175 ] 00:08:16.175 }' 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.175 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.745 [2024-11-18 23:03:35.926343] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.745 "name": "Existed_Raid", 00:08:16.745 "aliases": [ 00:08:16.745 "29e3d9b4-1e95-4646-a695-ee9a8bb0c593" 00:08:16.745 ], 00:08:16.745 "product_name": "Raid Volume", 00:08:16.745 "block_size": 512, 00:08:16.745 "num_blocks": 190464, 00:08:16.745 "uuid": "29e3d9b4-1e95-4646-a695-ee9a8bb0c593", 00:08:16.745 "assigned_rate_limits": { 00:08:16.745 "rw_ios_per_sec": 0, 00:08:16.745 "rw_mbytes_per_sec": 0, 00:08:16.745 "r_mbytes_per_sec": 0, 00:08:16.745 "w_mbytes_per_sec": 0 00:08:16.745 }, 00:08:16.745 "claimed": false, 00:08:16.745 "zoned": false, 00:08:16.745 "supported_io_types": { 00:08:16.745 "read": true, 00:08:16.745 "write": true, 00:08:16.745 "unmap": true, 00:08:16.745 "flush": true, 00:08:16.745 "reset": true, 00:08:16.745 "nvme_admin": false, 00:08:16.745 "nvme_io": false, 00:08:16.745 "nvme_io_md": false, 00:08:16.745 
"write_zeroes": true, 00:08:16.745 "zcopy": false, 00:08:16.745 "get_zone_info": false, 00:08:16.745 "zone_management": false, 00:08:16.745 "zone_append": false, 00:08:16.745 "compare": false, 00:08:16.745 "compare_and_write": false, 00:08:16.745 "abort": false, 00:08:16.745 "seek_hole": false, 00:08:16.745 "seek_data": false, 00:08:16.745 "copy": false, 00:08:16.745 "nvme_iov_md": false 00:08:16.745 }, 00:08:16.745 "memory_domains": [ 00:08:16.745 { 00:08:16.745 "dma_device_id": "system", 00:08:16.745 "dma_device_type": 1 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.745 "dma_device_type": 2 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "dma_device_id": "system", 00:08:16.745 "dma_device_type": 1 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.745 "dma_device_type": 2 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "dma_device_id": "system", 00:08:16.745 "dma_device_type": 1 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.745 "dma_device_type": 2 00:08:16.745 } 00:08:16.745 ], 00:08:16.745 "driver_specific": { 00:08:16.745 "raid": { 00:08:16.745 "uuid": "29e3d9b4-1e95-4646-a695-ee9a8bb0c593", 00:08:16.745 "strip_size_kb": 64, 00:08:16.745 "state": "online", 00:08:16.745 "raid_level": "raid0", 00:08:16.745 "superblock": true, 00:08:16.745 "num_base_bdevs": 3, 00:08:16.745 "num_base_bdevs_discovered": 3, 00:08:16.745 "num_base_bdevs_operational": 3, 00:08:16.745 "base_bdevs_list": [ 00:08:16.745 { 00:08:16.745 "name": "BaseBdev1", 00:08:16.745 "uuid": "2ec70b51-aab2-4e2a-a02a-3aa0cd30c599", 00:08:16.745 "is_configured": true, 00:08:16.745 "data_offset": 2048, 00:08:16.745 "data_size": 63488 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "name": "BaseBdev2", 00:08:16.745 "uuid": "810ede07-01a7-45ad-9e53-9bc2cf5a4854", 00:08:16.745 "is_configured": true, 00:08:16.745 "data_offset": 2048, 00:08:16.745 "data_size": 63488 00:08:16.745 }, 
00:08:16.745 { 00:08:16.745 "name": "BaseBdev3", 00:08:16.745 "uuid": "1704d7b8-b51f-4f28-b7ad-e9a3f7108aed", 00:08:16.745 "is_configured": true, 00:08:16.745 "data_offset": 2048, 00:08:16.745 "data_size": 63488 00:08:16.745 } 00:08:16.745 ] 00:08:16.745 } 00:08:16.745 } 00:08:16.745 }' 00:08:16.745 23:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:16.745 BaseBdev2 00:08:16.745 BaseBdev3' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.745 
23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.745 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.005 [2024-11-18 23:03:36.193646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.005 [2024-11-18 23:03:36.193671] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.005 [2024-11-18 23:03:36.193724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.005 "name": "Existed_Raid", 00:08:17.005 "uuid": "29e3d9b4-1e95-4646-a695-ee9a8bb0c593", 00:08:17.005 "strip_size_kb": 64, 00:08:17.005 "state": "offline", 00:08:17.005 "raid_level": "raid0", 00:08:17.005 "superblock": true, 00:08:17.005 "num_base_bdevs": 3, 00:08:17.005 "num_base_bdevs_discovered": 2, 00:08:17.005 "num_base_bdevs_operational": 2, 00:08:17.005 "base_bdevs_list": [ 00:08:17.005 { 00:08:17.005 "name": null, 00:08:17.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.005 "is_configured": false, 00:08:17.005 "data_offset": 0, 00:08:17.005 "data_size": 63488 00:08:17.005 }, 00:08:17.005 { 00:08:17.005 "name": "BaseBdev2", 00:08:17.005 "uuid": "810ede07-01a7-45ad-9e53-9bc2cf5a4854", 00:08:17.005 "is_configured": true, 00:08:17.005 "data_offset": 2048, 00:08:17.005 "data_size": 63488 00:08:17.005 }, 00:08:17.005 { 00:08:17.005 "name": "BaseBdev3", 00:08:17.005 "uuid": "1704d7b8-b51f-4f28-b7ad-e9a3f7108aed", 
00:08:17.005 "is_configured": true, 00:08:17.005 "data_offset": 2048, 00:08:17.005 "data_size": 63488 00:08:17.005 } 00:08:17.005 ] 00:08:17.005 }' 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.005 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.265 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.266 [2024-11-18 23:03:36.620219] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.266 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 [2024-11-18 23:03:36.691349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:17.526 [2024-11-18 23:03:36.691392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 BaseBdev2 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:17.526 23:03:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 [ 00:08:17.526 { 00:08:17.526 "name": "BaseBdev2", 00:08:17.526 "aliases": [ 00:08:17.526 "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f" 00:08:17.526 ], 00:08:17.526 "product_name": "Malloc disk", 00:08:17.526 "block_size": 512, 00:08:17.526 "num_blocks": 65536, 00:08:17.526 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:17.526 "assigned_rate_limits": { 00:08:17.526 "rw_ios_per_sec": 0, 00:08:17.526 "rw_mbytes_per_sec": 0, 00:08:17.526 "r_mbytes_per_sec": 0, 00:08:17.526 "w_mbytes_per_sec": 0 00:08:17.526 }, 00:08:17.526 "claimed": false, 00:08:17.526 "zoned": false, 00:08:17.526 "supported_io_types": { 00:08:17.526 "read": true, 00:08:17.526 "write": true, 00:08:17.526 "unmap": true, 00:08:17.526 "flush": true, 00:08:17.526 "reset": true, 00:08:17.526 "nvme_admin": false, 00:08:17.526 "nvme_io": false, 00:08:17.526 "nvme_io_md": false, 00:08:17.526 "write_zeroes": true, 00:08:17.526 "zcopy": true, 00:08:17.526 "get_zone_info": false, 00:08:17.526 
"zone_management": false, 00:08:17.526 "zone_append": false, 00:08:17.526 "compare": false, 00:08:17.526 "compare_and_write": false, 00:08:17.526 "abort": true, 00:08:17.526 "seek_hole": false, 00:08:17.526 "seek_data": false, 00:08:17.526 "copy": true, 00:08:17.526 "nvme_iov_md": false 00:08:17.526 }, 00:08:17.526 "memory_domains": [ 00:08:17.526 { 00:08:17.526 "dma_device_id": "system", 00:08:17.526 "dma_device_type": 1 00:08:17.526 }, 00:08:17.526 { 00:08:17.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.526 "dma_device_type": 2 00:08:17.526 } 00:08:17.526 ], 00:08:17.526 "driver_specific": {} 00:08:17.526 } 00:08:17.526 ] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 BaseBdev3 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.526 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.527 [ 00:08:17.527 { 00:08:17.527 "name": "BaseBdev3", 00:08:17.527 "aliases": [ 00:08:17.527 "a8253655-1cd5-4066-9d80-cf9eda982af9" 00:08:17.527 ], 00:08:17.527 "product_name": "Malloc disk", 00:08:17.527 "block_size": 512, 00:08:17.527 "num_blocks": 65536, 00:08:17.527 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:17.527 "assigned_rate_limits": { 00:08:17.527 "rw_ios_per_sec": 0, 00:08:17.527 "rw_mbytes_per_sec": 0, 00:08:17.527 "r_mbytes_per_sec": 0, 00:08:17.527 "w_mbytes_per_sec": 0 00:08:17.527 }, 00:08:17.527 "claimed": false, 00:08:17.527 "zoned": false, 00:08:17.527 "supported_io_types": { 00:08:17.527 "read": true, 00:08:17.527 "write": true, 00:08:17.527 "unmap": true, 00:08:17.527 "flush": true, 00:08:17.527 "reset": true, 00:08:17.527 "nvme_admin": false, 00:08:17.527 "nvme_io": false, 00:08:17.527 "nvme_io_md": false, 00:08:17.527 "write_zeroes": true, 00:08:17.527 
"zcopy": true, 00:08:17.527 "get_zone_info": false, 00:08:17.527 "zone_management": false, 00:08:17.527 "zone_append": false, 00:08:17.527 "compare": false, 00:08:17.527 "compare_and_write": false, 00:08:17.527 "abort": true, 00:08:17.527 "seek_hole": false, 00:08:17.527 "seek_data": false, 00:08:17.527 "copy": true, 00:08:17.527 "nvme_iov_md": false 00:08:17.527 }, 00:08:17.527 "memory_domains": [ 00:08:17.527 { 00:08:17.527 "dma_device_id": "system", 00:08:17.527 "dma_device_type": 1 00:08:17.527 }, 00:08:17.527 { 00:08:17.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.527 "dma_device_type": 2 00:08:17.527 } 00:08:17.527 ], 00:08:17.527 "driver_specific": {} 00:08:17.527 } 00:08:17.527 ] 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.527 [2024-11-18 23:03:36.865826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.527 [2024-11-18 23:03:36.865918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.527 [2024-11-18 23:03:36.865957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.527 [2024-11-18 23:03:36.867793] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.527 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.794 23:03:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.794 "name": "Existed_Raid", 00:08:17.794 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:17.794 "strip_size_kb": 64, 00:08:17.794 "state": "configuring", 00:08:17.794 "raid_level": "raid0", 00:08:17.794 "superblock": true, 00:08:17.794 "num_base_bdevs": 3, 00:08:17.794 "num_base_bdevs_discovered": 2, 00:08:17.794 "num_base_bdevs_operational": 3, 00:08:17.794 "base_bdevs_list": [ 00:08:17.794 { 00:08:17.794 "name": "BaseBdev1", 00:08:17.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.794 "is_configured": false, 00:08:17.794 "data_offset": 0, 00:08:17.794 "data_size": 0 00:08:17.794 }, 00:08:17.794 { 00:08:17.794 "name": "BaseBdev2", 00:08:17.794 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:17.794 "is_configured": true, 00:08:17.794 "data_offset": 2048, 00:08:17.794 "data_size": 63488 00:08:17.794 }, 00:08:17.794 { 00:08:17.794 "name": "BaseBdev3", 00:08:17.794 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:17.794 "is_configured": true, 00:08:17.794 "data_offset": 2048, 00:08:17.794 "data_size": 63488 00:08:17.794 } 00:08:17.794 ] 00:08:17.794 }' 00:08:17.794 23:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.794 23:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.063 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:18.063 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.063 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.063 [2024-11-18 23:03:37.313060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.063 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.064 23:03:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.064 "name": "Existed_Raid", 00:08:18.064 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:18.064 "strip_size_kb": 64, 
00:08:18.064 "state": "configuring", 00:08:18.064 "raid_level": "raid0", 00:08:18.064 "superblock": true, 00:08:18.064 "num_base_bdevs": 3, 00:08:18.064 "num_base_bdevs_discovered": 1, 00:08:18.064 "num_base_bdevs_operational": 3, 00:08:18.064 "base_bdevs_list": [ 00:08:18.064 { 00:08:18.064 "name": "BaseBdev1", 00:08:18.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.064 "is_configured": false, 00:08:18.064 "data_offset": 0, 00:08:18.064 "data_size": 0 00:08:18.064 }, 00:08:18.064 { 00:08:18.064 "name": null, 00:08:18.064 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:18.064 "is_configured": false, 00:08:18.064 "data_offset": 0, 00:08:18.064 "data_size": 63488 00:08:18.064 }, 00:08:18.064 { 00:08:18.064 "name": "BaseBdev3", 00:08:18.064 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:18.064 "is_configured": true, 00:08:18.064 "data_offset": 2048, 00:08:18.064 "data_size": 63488 00:08:18.064 } 00:08:18.064 ] 00:08:18.064 }' 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.064 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.632 [2024-11-18 23:03:37.803113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.632 BaseBdev1 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.632 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.632 
[ 00:08:18.632 { 00:08:18.632 "name": "BaseBdev1", 00:08:18.632 "aliases": [ 00:08:18.632 "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f" 00:08:18.633 ], 00:08:18.633 "product_name": "Malloc disk", 00:08:18.633 "block_size": 512, 00:08:18.633 "num_blocks": 65536, 00:08:18.633 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:18.633 "assigned_rate_limits": { 00:08:18.633 "rw_ios_per_sec": 0, 00:08:18.633 "rw_mbytes_per_sec": 0, 00:08:18.633 "r_mbytes_per_sec": 0, 00:08:18.633 "w_mbytes_per_sec": 0 00:08:18.633 }, 00:08:18.633 "claimed": true, 00:08:18.633 "claim_type": "exclusive_write", 00:08:18.633 "zoned": false, 00:08:18.633 "supported_io_types": { 00:08:18.633 "read": true, 00:08:18.633 "write": true, 00:08:18.633 "unmap": true, 00:08:18.633 "flush": true, 00:08:18.633 "reset": true, 00:08:18.633 "nvme_admin": false, 00:08:18.633 "nvme_io": false, 00:08:18.633 "nvme_io_md": false, 00:08:18.633 "write_zeroes": true, 00:08:18.633 "zcopy": true, 00:08:18.633 "get_zone_info": false, 00:08:18.633 "zone_management": false, 00:08:18.633 "zone_append": false, 00:08:18.633 "compare": false, 00:08:18.633 "compare_and_write": false, 00:08:18.633 "abort": true, 00:08:18.633 "seek_hole": false, 00:08:18.633 "seek_data": false, 00:08:18.633 "copy": true, 00:08:18.633 "nvme_iov_md": false 00:08:18.633 }, 00:08:18.633 "memory_domains": [ 00:08:18.633 { 00:08:18.633 "dma_device_id": "system", 00:08:18.633 "dma_device_type": 1 00:08:18.633 }, 00:08:18.633 { 00:08:18.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.633 "dma_device_type": 2 00:08:18.633 } 00:08:18.633 ], 00:08:18.633 "driver_specific": {} 00:08:18.633 } 00:08:18.633 ] 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.633 "name": "Existed_Raid", 00:08:18.633 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:18.633 "strip_size_kb": 64, 00:08:18.633 "state": "configuring", 00:08:18.633 "raid_level": "raid0", 00:08:18.633 "superblock": true, 
00:08:18.633 "num_base_bdevs": 3, 00:08:18.633 "num_base_bdevs_discovered": 2, 00:08:18.633 "num_base_bdevs_operational": 3, 00:08:18.633 "base_bdevs_list": [ 00:08:18.633 { 00:08:18.633 "name": "BaseBdev1", 00:08:18.633 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:18.633 "is_configured": true, 00:08:18.633 "data_offset": 2048, 00:08:18.633 "data_size": 63488 00:08:18.633 }, 00:08:18.633 { 00:08:18.633 "name": null, 00:08:18.633 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:18.633 "is_configured": false, 00:08:18.633 "data_offset": 0, 00:08:18.633 "data_size": 63488 00:08:18.633 }, 00:08:18.633 { 00:08:18.633 "name": "BaseBdev3", 00:08:18.633 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:18.633 "is_configured": true, 00:08:18.633 "data_offset": 2048, 00:08:18.633 "data_size": 63488 00:08:18.633 } 00:08:18.633 ] 00:08:18.633 }' 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.633 23:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.893 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.893 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:18.893 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.893 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.893 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.153 [2024-11-18 23:03:38.286328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.153 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.154 "name": "Existed_Raid", 00:08:19.154 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:19.154 "strip_size_kb": 64, 00:08:19.154 "state": "configuring", 00:08:19.154 "raid_level": "raid0", 00:08:19.154 "superblock": true, 00:08:19.154 "num_base_bdevs": 3, 00:08:19.154 "num_base_bdevs_discovered": 1, 00:08:19.154 "num_base_bdevs_operational": 3, 00:08:19.154 "base_bdevs_list": [ 00:08:19.154 { 00:08:19.154 "name": "BaseBdev1", 00:08:19.154 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:19.154 "is_configured": true, 00:08:19.154 "data_offset": 2048, 00:08:19.154 "data_size": 63488 00:08:19.154 }, 00:08:19.154 { 00:08:19.154 "name": null, 00:08:19.154 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:19.154 "is_configured": false, 00:08:19.154 "data_offset": 0, 00:08:19.154 "data_size": 63488 00:08:19.154 }, 00:08:19.154 { 00:08:19.154 "name": null, 00:08:19.154 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:19.154 "is_configured": false, 00:08:19.154 "data_offset": 0, 00:08:19.154 "data_size": 63488 00:08:19.154 } 00:08:19.154 ] 00:08:19.154 }' 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.154 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.413 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.413 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.413 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.413 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:19.413 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.673 [2024-11-18 23:03:38.801461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.673 "name": "Existed_Raid", 00:08:19.673 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:19.673 "strip_size_kb": 64, 00:08:19.673 "state": "configuring", 00:08:19.673 "raid_level": "raid0", 00:08:19.673 "superblock": true, 00:08:19.673 "num_base_bdevs": 3, 00:08:19.673 "num_base_bdevs_discovered": 2, 00:08:19.673 "num_base_bdevs_operational": 3, 00:08:19.673 "base_bdevs_list": [ 00:08:19.673 { 00:08:19.673 "name": "BaseBdev1", 00:08:19.673 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:19.673 "is_configured": true, 00:08:19.673 "data_offset": 2048, 00:08:19.673 "data_size": 63488 00:08:19.673 }, 00:08:19.673 { 00:08:19.673 "name": null, 00:08:19.673 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:19.673 "is_configured": false, 00:08:19.673 "data_offset": 0, 00:08:19.673 "data_size": 63488 00:08:19.673 }, 00:08:19.673 { 00:08:19.673 "name": "BaseBdev3", 00:08:19.673 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:19.673 "is_configured": true, 00:08:19.673 "data_offset": 2048, 00:08:19.673 "data_size": 63488 00:08:19.673 } 00:08:19.673 ] 00:08:19.673 }' 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.673 23:03:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.938 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.939 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.939 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.939 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.939 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.198 [2024-11-18 23:03:39.320572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.198 "name": "Existed_Raid", 00:08:20.198 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:20.198 "strip_size_kb": 64, 00:08:20.198 "state": "configuring", 00:08:20.198 "raid_level": "raid0", 00:08:20.198 "superblock": true, 00:08:20.198 "num_base_bdevs": 3, 00:08:20.198 "num_base_bdevs_discovered": 1, 00:08:20.198 "num_base_bdevs_operational": 3, 00:08:20.198 "base_bdevs_list": [ 00:08:20.198 { 00:08:20.198 "name": null, 00:08:20.198 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:20.198 "is_configured": false, 00:08:20.198 "data_offset": 0, 00:08:20.198 "data_size": 63488 00:08:20.198 }, 00:08:20.198 { 00:08:20.198 "name": null, 00:08:20.198 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:20.198 "is_configured": false, 00:08:20.198 "data_offset": 0, 00:08:20.198 
"data_size": 63488 00:08:20.198 }, 00:08:20.198 { 00:08:20.198 "name": "BaseBdev3", 00:08:20.198 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:20.198 "is_configured": true, 00:08:20.198 "data_offset": 2048, 00:08:20.198 "data_size": 63488 00:08:20.198 } 00:08:20.198 ] 00:08:20.198 }' 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.198 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.459 [2024-11-18 23:03:39.786241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.459 23:03:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.459 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.719 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.719 "name": "Existed_Raid", 00:08:20.719 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:20.719 "strip_size_kb": 64, 00:08:20.719 "state": "configuring", 00:08:20.719 "raid_level": "raid0", 00:08:20.719 "superblock": true, 00:08:20.719 "num_base_bdevs": 3, 00:08:20.719 
"num_base_bdevs_discovered": 2, 00:08:20.719 "num_base_bdevs_operational": 3, 00:08:20.719 "base_bdevs_list": [ 00:08:20.719 { 00:08:20.719 "name": null, 00:08:20.719 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:20.719 "is_configured": false, 00:08:20.719 "data_offset": 0, 00:08:20.719 "data_size": 63488 00:08:20.719 }, 00:08:20.719 { 00:08:20.719 "name": "BaseBdev2", 00:08:20.719 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:20.719 "is_configured": true, 00:08:20.719 "data_offset": 2048, 00:08:20.719 "data_size": 63488 00:08:20.719 }, 00:08:20.719 { 00:08:20.719 "name": "BaseBdev3", 00:08:20.719 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:20.719 "is_configured": true, 00:08:20.719 "data_offset": 2048, 00:08:20.719 "data_size": 63488 00:08:20.719 } 00:08:20.719 ] 00:08:20.719 }' 00:08:20.719 23:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.719 23:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:20.979 23:03:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f7ce0606-f5dc-4004-8a8c-6b39a87afe9f 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.979 [2024-11-18 23:03:40.348211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:20.979 [2024-11-18 23:03:40.348396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:20.979 [2024-11-18 23:03:40.348414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.979 NewBaseBdev 00:08:20.979 [2024-11-18 23:03:40.348658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:20.979 [2024-11-18 23:03:40.348769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:20.979 [2024-11-18 23:03:40.348779] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:20.979 [2024-11-18 23:03:40.348875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:20.979 
23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.979 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.239 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.239 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:21.239 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.239 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.239 [ 00:08:21.239 { 00:08:21.239 "name": "NewBaseBdev", 00:08:21.239 "aliases": [ 00:08:21.239 "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f" 00:08:21.239 ], 00:08:21.239 "product_name": "Malloc disk", 00:08:21.239 "block_size": 512, 00:08:21.239 "num_blocks": 65536, 00:08:21.239 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:21.239 "assigned_rate_limits": { 00:08:21.239 "rw_ios_per_sec": 0, 00:08:21.240 "rw_mbytes_per_sec": 0, 00:08:21.240 "r_mbytes_per_sec": 0, 00:08:21.240 "w_mbytes_per_sec": 0 00:08:21.240 }, 00:08:21.240 "claimed": true, 00:08:21.240 "claim_type": "exclusive_write", 00:08:21.240 "zoned": false, 00:08:21.240 "supported_io_types": { 00:08:21.240 "read": true, 00:08:21.240 "write": true, 00:08:21.240 
"unmap": true, 00:08:21.240 "flush": true, 00:08:21.240 "reset": true, 00:08:21.240 "nvme_admin": false, 00:08:21.240 "nvme_io": false, 00:08:21.240 "nvme_io_md": false, 00:08:21.240 "write_zeroes": true, 00:08:21.240 "zcopy": true, 00:08:21.240 "get_zone_info": false, 00:08:21.240 "zone_management": false, 00:08:21.240 "zone_append": false, 00:08:21.240 "compare": false, 00:08:21.240 "compare_and_write": false, 00:08:21.240 "abort": true, 00:08:21.240 "seek_hole": false, 00:08:21.240 "seek_data": false, 00:08:21.240 "copy": true, 00:08:21.240 "nvme_iov_md": false 00:08:21.240 }, 00:08:21.240 "memory_domains": [ 00:08:21.240 { 00:08:21.240 "dma_device_id": "system", 00:08:21.240 "dma_device_type": 1 00:08:21.240 }, 00:08:21.240 { 00:08:21.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.240 "dma_device_type": 2 00:08:21.240 } 00:08:21.240 ], 00:08:21.240 "driver_specific": {} 00:08:21.240 } 00:08:21.240 ] 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.240 "name": "Existed_Raid", 00:08:21.240 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:21.240 "strip_size_kb": 64, 00:08:21.240 "state": "online", 00:08:21.240 "raid_level": "raid0", 00:08:21.240 "superblock": true, 00:08:21.240 "num_base_bdevs": 3, 00:08:21.240 "num_base_bdevs_discovered": 3, 00:08:21.240 "num_base_bdevs_operational": 3, 00:08:21.240 "base_bdevs_list": [ 00:08:21.240 { 00:08:21.240 "name": "NewBaseBdev", 00:08:21.240 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:21.240 "is_configured": true, 00:08:21.240 "data_offset": 2048, 00:08:21.240 "data_size": 63488 00:08:21.240 }, 00:08:21.240 { 00:08:21.240 "name": "BaseBdev2", 00:08:21.240 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:21.240 "is_configured": true, 00:08:21.240 "data_offset": 2048, 00:08:21.240 "data_size": 63488 00:08:21.240 }, 00:08:21.240 { 00:08:21.240 "name": "BaseBdev3", 00:08:21.240 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:21.240 
"is_configured": true, 00:08:21.240 "data_offset": 2048, 00:08:21.240 "data_size": 63488 00:08:21.240 } 00:08:21.240 ] 00:08:21.240 }' 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.240 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.500 [2024-11-18 23:03:40.779772] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.500 "name": "Existed_Raid", 00:08:21.500 "aliases": [ 00:08:21.500 "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b" 00:08:21.500 ], 00:08:21.500 "product_name": "Raid 
Volume", 00:08:21.500 "block_size": 512, 00:08:21.500 "num_blocks": 190464, 00:08:21.500 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:21.500 "assigned_rate_limits": { 00:08:21.500 "rw_ios_per_sec": 0, 00:08:21.500 "rw_mbytes_per_sec": 0, 00:08:21.500 "r_mbytes_per_sec": 0, 00:08:21.500 "w_mbytes_per_sec": 0 00:08:21.500 }, 00:08:21.500 "claimed": false, 00:08:21.500 "zoned": false, 00:08:21.500 "supported_io_types": { 00:08:21.500 "read": true, 00:08:21.500 "write": true, 00:08:21.500 "unmap": true, 00:08:21.500 "flush": true, 00:08:21.500 "reset": true, 00:08:21.500 "nvme_admin": false, 00:08:21.500 "nvme_io": false, 00:08:21.500 "nvme_io_md": false, 00:08:21.500 "write_zeroes": true, 00:08:21.500 "zcopy": false, 00:08:21.500 "get_zone_info": false, 00:08:21.500 "zone_management": false, 00:08:21.500 "zone_append": false, 00:08:21.500 "compare": false, 00:08:21.500 "compare_and_write": false, 00:08:21.500 "abort": false, 00:08:21.500 "seek_hole": false, 00:08:21.500 "seek_data": false, 00:08:21.500 "copy": false, 00:08:21.500 "nvme_iov_md": false 00:08:21.500 }, 00:08:21.500 "memory_domains": [ 00:08:21.500 { 00:08:21.500 "dma_device_id": "system", 00:08:21.500 "dma_device_type": 1 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.500 "dma_device_type": 2 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "dma_device_id": "system", 00:08:21.500 "dma_device_type": 1 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.500 "dma_device_type": 2 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "dma_device_id": "system", 00:08:21.500 "dma_device_type": 1 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.500 "dma_device_type": 2 00:08:21.500 } 00:08:21.500 ], 00:08:21.500 "driver_specific": { 00:08:21.500 "raid": { 00:08:21.500 "uuid": "9ab2fc3d-a4b7-4b28-b050-e6b16ed9070b", 00:08:21.500 "strip_size_kb": 64, 00:08:21.500 "state": "online", 
00:08:21.500 "raid_level": "raid0", 00:08:21.500 "superblock": true, 00:08:21.500 "num_base_bdevs": 3, 00:08:21.500 "num_base_bdevs_discovered": 3, 00:08:21.500 "num_base_bdevs_operational": 3, 00:08:21.500 "base_bdevs_list": [ 00:08:21.500 { 00:08:21.500 "name": "NewBaseBdev", 00:08:21.500 "uuid": "f7ce0606-f5dc-4004-8a8c-6b39a87afe9f", 00:08:21.500 "is_configured": true, 00:08:21.500 "data_offset": 2048, 00:08:21.500 "data_size": 63488 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "name": "BaseBdev2", 00:08:21.500 "uuid": "e5a20fe0-5c29-47b3-a9ae-f70e50696d3f", 00:08:21.500 "is_configured": true, 00:08:21.500 "data_offset": 2048, 00:08:21.500 "data_size": 63488 00:08:21.500 }, 00:08:21.500 { 00:08:21.500 "name": "BaseBdev3", 00:08:21.500 "uuid": "a8253655-1cd5-4066-9d80-cf9eda982af9", 00:08:21.500 "is_configured": true, 00:08:21.500 "data_offset": 2048, 00:08:21.500 "data_size": 63488 00:08:21.500 } 00:08:21.500 ] 00:08:21.500 } 00:08:21.500 } 00:08:21.500 }' 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.500 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:21.501 BaseBdev2 00:08:21.501 BaseBdev3' 00:08:21.501 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.761 23:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.761 [2024-11-18 23:03:41.027242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.761 [2024-11-18 23:03:41.027267] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.761 [2024-11-18 23:03:41.027334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.761 [2024-11-18 23:03:41.027382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.761 [2024-11-18 23:03:41.027393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75621 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75621 ']' 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75621 00:08:21.761 23:03:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75621 00:08:21.761 killing process with pid 75621 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75621' 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75621 00:08:21.761 [2024-11-18 23:03:41.072311] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.761 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75621 00:08:21.761 [2024-11-18 23:03:41.102644] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.020 23:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:22.020 00:08:22.020 real 0m8.697s 00:08:22.020 user 0m14.943s 00:08:22.020 sys 0m1.653s 00:08:22.020 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.021 23:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.021 ************************************ 00:08:22.021 END TEST raid_state_function_test_sb 00:08:22.021 ************************************ 00:08:22.281 23:03:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:22.281 23:03:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:22.281 23:03:41 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.281 23:03:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.281 ************************************ 00:08:22.281 START TEST raid_superblock_test 00:08:22.281 ************************************ 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:22.281 23:03:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76224 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76224 00:08:22.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76224 ']' 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.281 23:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.281 [2024-11-18 23:03:41.500899] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:22.281 [2024-11-18 23:03:41.501035] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76224 ] 00:08:22.541 [2024-11-18 23:03:41.659725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.541 [2024-11-18 23:03:41.704229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.541 [2024-11-18 23:03:41.746596] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.541 [2024-11-18 23:03:41.746715] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:23.110 
23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.110 malloc1 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.110 [2024-11-18 23:03:42.337134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.110 [2024-11-18 23:03:42.337272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.110 [2024-11-18 23:03:42.337326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:23.110 [2024-11-18 23:03:42.337383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.110 [2024-11-18 23:03:42.339485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.110 [2024-11-18 23:03:42.339558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.110 pt1 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:23.110 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:23.111 23:03:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.111 malloc2
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.111 [2024-11-18 23:03:42.383869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:23.111 [2024-11-18 23:03:42.384000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:23.111 [2024-11-18 23:03:42.384051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:23.111 [2024-11-18 23:03:42.384103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:23.111 [2024-11-18 23:03:42.387268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:23.111 [2024-11-18 23:03:42.387383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:23.111
pt2
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.111 malloc3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.111 [2024-11-18 23:03:42.416755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:23.111 [2024-11-18 23:03:42.416858]
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:23.111 [2024-11-18 23:03:42.416919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:23.111 [2024-11-18 23:03:42.416952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:23.111 [2024-11-18 23:03:42.419010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:23.111 [2024-11-18 23:03:42.419079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:08:23.111 pt3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.111 [2024-11-18 23:03:42.428783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:23.111 [2024-11-18 23:03:42.430667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:23.111 [2024-11-18 23:03:42.430729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:23.111 [2024-11-18 23:03:42.430865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:23.111 [2024-11-18 23:03:42.430876] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:23.111 [2024-11-18 23:03:42.431108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:08:23.111 [2024-11-18 23:03:42.431241] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:23.111 [2024-11-18 23:03:42.431255] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:08:23.111 [2024-11-18 23:03:42.431392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.111 23:03:42
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.111 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.371 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.371 "name": "raid_bdev1",
00:08:23.371 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a",
00:08:23.371 "strip_size_kb": 64,
00:08:23.371 "state": "online",
00:08:23.371 "raid_level": "raid0",
00:08:23.371 "superblock": true,
00:08:23.371 "num_base_bdevs": 3,
00:08:23.371 "num_base_bdevs_discovered": 3,
00:08:23.371 "num_base_bdevs_operational": 3,
00:08:23.371 "base_bdevs_list": [
00:08:23.371 {
00:08:23.371 "name": "pt1",
00:08:23.371 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:23.371 "is_configured": true,
00:08:23.371 "data_offset": 2048,
00:08:23.371 "data_size": 63488
00:08:23.371 },
00:08:23.371 {
00:08:23.371 "name": "pt2",
00:08:23.371 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:23.371 "is_configured": true,
00:08:23.371 "data_offset": 2048,
00:08:23.371 "data_size": 63488
00:08:23.371 },
00:08:23.371 {
00:08:23.371 "name": "pt3",
00:08:23.371 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:23.371 "is_configured": true,
00:08:23.371 "data_offset": 2048,
00:08:23.371 "data_size": 63488
00:08:23.371 }
00:08:23.371 ]
00:08:23.371 }'
00:08:23.371 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.371 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.630 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:23.630 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:23.630 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:23.630 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local
base_bdev_names
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.631 [2024-11-18 23:03:42.852336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:23.631 "name": "raid_bdev1",
00:08:23.631 "aliases": [
00:08:23.631 "f534946d-cfa6-43ac-9f96-4786971d1c3a"
00:08:23.631 ],
00:08:23.631 "product_name": "Raid Volume",
00:08:23.631 "block_size": 512,
00:08:23.631 "num_blocks": 190464,
00:08:23.631 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a",
00:08:23.631 "assigned_rate_limits": {
00:08:23.631 "rw_ios_per_sec": 0,
00:08:23.631 "rw_mbytes_per_sec": 0,
00:08:23.631 "r_mbytes_per_sec": 0,
00:08:23.631 "w_mbytes_per_sec": 0
00:08:23.631 },
00:08:23.631 "claimed": false,
00:08:23.631 "zoned": false,
00:08:23.631 "supported_io_types": {
00:08:23.631 "read": true,
00:08:23.631 "write": true,
00:08:23.631 "unmap": true,
00:08:23.631 "flush": true,
00:08:23.631 "reset": true,
00:08:23.631 "nvme_admin": false,
00:08:23.631 "nvme_io": false,
00:08:23.631 "nvme_io_md": false,
00:08:23.631 "write_zeroes": true,
00:08:23.631 "zcopy": false,
00:08:23.631 "get_zone_info": false,
00:08:23.631 "zone_management": false,
00:08:23.631 "zone_append": false,
00:08:23.631 "compare":
false,
00:08:23.631 "compare_and_write": false,
00:08:23.631 "abort": false,
00:08:23.631 "seek_hole": false,
00:08:23.631 "seek_data": false,
00:08:23.631 "copy": false,
00:08:23.631 "nvme_iov_md": false
00:08:23.631 },
00:08:23.631 "memory_domains": [
00:08:23.631 {
00:08:23.631 "dma_device_id": "system",
00:08:23.631 "dma_device_type": 1
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.631 "dma_device_type": 2
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "dma_device_id": "system",
00:08:23.631 "dma_device_type": 1
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.631 "dma_device_type": 2
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "dma_device_id": "system",
00:08:23.631 "dma_device_type": 1
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.631 "dma_device_type": 2
00:08:23.631 }
00:08:23.631 ],
00:08:23.631 "driver_specific": {
00:08:23.631 "raid": {
00:08:23.631 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a",
00:08:23.631 "strip_size_kb": 64,
00:08:23.631 "state": "online",
00:08:23.631 "raid_level": "raid0",
00:08:23.631 "superblock": true,
00:08:23.631 "num_base_bdevs": 3,
00:08:23.631 "num_base_bdevs_discovered": 3,
00:08:23.631 "num_base_bdevs_operational": 3,
00:08:23.631 "base_bdevs_list": [
00:08:23.631 {
00:08:23.631 "name": "pt1",
00:08:23.631 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:23.631 "is_configured": true,
00:08:23.631 "data_offset": 2048,
00:08:23.631 "data_size": 63488
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "name": "pt2",
00:08:23.631 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:23.631 "is_configured": true,
00:08:23.631 "data_offset": 2048,
00:08:23.631 "data_size": 63488
00:08:23.631 },
00:08:23.631 {
00:08:23.631 "name": "pt3",
00:08:23.631 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:23.631 "is_configured": true,
00:08:23.631 "data_offset": 2048,
00:08:23.631 "data_size":
63488
00:08:23.631 }
00:08:23.631 ]
00:08:23.631 }
00:08:23.631 }
00:08:23.631 }'
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:23.631 pt2
00:08:23.631 pt3'
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.631 23:03:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test --
common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:23.891 [2024-11-18 23:03:43.103831] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589
-- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f534946d-cfa6-43ac-9f96-4786971d1c3a
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f534946d-cfa6-43ac-9f96-4786971d1c3a ']'
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 [2024-11-18 23:03:43.155472] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:23.891 [2024-11-18 23:03:43.155532] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:23.891 [2024-11-18 23:03:43.155628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:23.891 [2024-11-18 23:03:43.155718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:23.891 [2024-11-18 23:03:43.155766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.891 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.892 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.892 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:23.892 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:23.892 23:03:43
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.892 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.151 [2024-11-18 23:03:43.303277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:24.151 [2024-11-18 23:03:43.305109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:24.151 [2024-11-18 23:03:43.305157]
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:08:24.151 [2024-11-18 23:03:43.305204] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:24.151 [2024-11-18 23:03:43.305252] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:24.151 [2024-11-18 23:03:43.305272] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:08:24.151 [2024-11-18 23:03:43.305295] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:24.151 [2024-11-18 23:03:43.305305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:08:24.151 request:
00:08:24.151 {
00:08:24.151 "name": "raid_bdev1",
00:08:24.151 "raid_level": "raid0",
00:08:24.151 "base_bdevs": [
00:08:24.151 "malloc1",
00:08:24.151 "malloc2",
00:08:24.151 "malloc3"
00:08:24.151 ],
00:08:24.151 "strip_size_kb": 64,
00:08:24.151 "superblock": false,
00:08:24.151 "method": "bdev_raid_create",
00:08:24.151 "req_id": 1
00:08:24.151 }
00:08:24.151 Got JSON-RPC error response
00:08:24.151 response:
00:08:24.151 {
00:08:24.151 "code": -17,
00:08:24.151 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:24.151 }
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.151 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.151 [2024-11-18 23:03:43.367167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:24.151 [2024-11-18 23:03:43.367252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:24.151 [2024-11-18 23:03:43.367306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:08:24.151 [2024-11-18 23:03:43.367341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:24.151 [2024-11-18 23:03:43.369419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:24.151 [2024-11-18 23:03:43.369486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:24.151 [2024-11-18 23:03:43.369567] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:24.151 [2024-11-18 23:03:43.369628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:24.151 pt1
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.152 "name": "raid_bdev1",
00:08:24.152 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a",
"strip_size_kb": 64,
00:08:24.152 "state": "configuring",
00:08:24.152 "raid_level": "raid0",
00:08:24.152 "superblock": true,
00:08:24.152 "num_base_bdevs": 3,
00:08:24.152 "num_base_bdevs_discovered": 1,
00:08:24.152 "num_base_bdevs_operational": 3,
00:08:24.152 "base_bdevs_list": [
00:08:24.152 {
00:08:24.152 "name": "pt1",
00:08:24.152 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:24.152 "is_configured": true,
00:08:24.152 "data_offset": 2048,
00:08:24.152 "data_size": 63488
00:08:24.152 },
00:08:24.152 {
00:08:24.152 "name": null,
00:08:24.152 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:24.152 "is_configured": false,
00:08:24.152 "data_offset": 2048,
00:08:24.152 "data_size": 63488
00:08:24.152 },
00:08:24.152 {
00:08:24.152 "name": null,
00:08:24.152 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:24.152 "is_configured": false,
00:08:24.152 "data_offset": 2048,
00:08:24.152 "data_size": 63488
00:08:24.152 }
00:08:24.152 ]
00:08:24.152 }'
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.152 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.732 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:08:24.732 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:24.732 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.732 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.733 [2024-11-18 23:03:43.830436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:24.733 [2024-11-18 23:03:43.830532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:24.733 [2024-11-18 23:03:43.830552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created
at: 0x0x616000009c80
00:08:24.733 [2024-11-18 23:03:43.830565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:24.733 [2024-11-18 23:03:43.830923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:24.733 [2024-11-18 23:03:43.830943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:24.733 [2024-11-18 23:03:43.831006] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:24.733 [2024-11-18 23:03:43.831027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:24.733 pt2
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.733 [2024-11-18 23:03:43.838429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:24.733 23:03:43
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.733 "name": "raid_bdev1", 00:08:24.733 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a", 00:08:24.733 "strip_size_kb": 64, 00:08:24.733 "state": "configuring", 00:08:24.733 "raid_level": "raid0", 00:08:24.733 "superblock": true, 00:08:24.733 "num_base_bdevs": 3, 00:08:24.733 "num_base_bdevs_discovered": 1, 00:08:24.733 "num_base_bdevs_operational": 3, 00:08:24.733 "base_bdevs_list": [ 00:08:24.733 { 00:08:24.733 "name": "pt1", 00:08:24.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.733 "is_configured": true, 00:08:24.733 "data_offset": 2048, 00:08:24.733 "data_size": 63488 00:08:24.733 }, 00:08:24.733 { 00:08:24.733 "name": null, 00:08:24.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.733 "is_configured": false, 00:08:24.733 "data_offset": 0, 00:08:24.733 "data_size": 63488 00:08:24.733 }, 00:08:24.733 { 00:08:24.733 "name": null, 00:08:24.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.733 
"is_configured": false, 00:08:24.733 "data_offset": 2048, 00:08:24.733 "data_size": 63488 00:08:24.733 } 00:08:24.733 ] 00:08:24.733 }' 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.733 23:03:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.992 [2024-11-18 23:03:44.285641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.992 [2024-11-18 23:03:44.285733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.992 [2024-11-18 23:03:44.285766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:24.992 [2024-11-18 23:03:44.285792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.992 [2024-11-18 23:03:44.286176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.992 [2024-11-18 23:03:44.286229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.992 [2024-11-18 23:03:44.286328] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:24.992 [2024-11-18 23:03:44.286377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.992 pt2 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.992 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 [2024-11-18 23:03:44.297601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:24.993 [2024-11-18 23:03:44.297677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.993 [2024-11-18 23:03:44.297709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:24.993 [2024-11-18 23:03:44.297734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.993 [2024-11-18 23:03:44.298051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.993 [2024-11-18 23:03:44.298102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:24.993 [2024-11-18 23:03:44.298182] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:24.993 [2024-11-18 23:03:44.298222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:24.993 [2024-11-18 23:03:44.298343] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:24.993 [2024-11-18 23:03:44.298380] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.993 [2024-11-18 23:03:44.298611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:24.993 [2024-11-18 23:03:44.298744] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:24.993 [2024-11-18 23:03:44.298781] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:24.993 [2024-11-18 23:03:44.298907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.993 pt3 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.993 "name": "raid_bdev1", 00:08:24.993 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a", 00:08:24.993 "strip_size_kb": 64, 00:08:24.993 "state": "online", 00:08:24.993 "raid_level": "raid0", 00:08:24.993 "superblock": true, 00:08:24.993 "num_base_bdevs": 3, 00:08:24.993 "num_base_bdevs_discovered": 3, 00:08:24.993 "num_base_bdevs_operational": 3, 00:08:24.993 "base_bdevs_list": [ 00:08:24.993 { 00:08:24.993 "name": "pt1", 00:08:24.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.993 "is_configured": true, 00:08:24.993 "data_offset": 2048, 00:08:24.993 "data_size": 63488 00:08:24.993 }, 00:08:24.993 { 00:08:24.993 "name": "pt2", 00:08:24.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.993 "is_configured": true, 00:08:24.993 "data_offset": 2048, 00:08:24.993 "data_size": 63488 00:08:24.993 }, 00:08:24.993 { 00:08:24.993 "name": "pt3", 00:08:24.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.993 "is_configured": true, 00:08:24.993 "data_offset": 2048, 00:08:24.993 "data_size": 63488 00:08:24.993 } 00:08:24.993 ] 00:08:24.993 }' 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.993 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:25.562 23:03:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.562 [2024-11-18 23:03:44.685228] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.562 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.562 "name": "raid_bdev1", 00:08:25.562 "aliases": [ 00:08:25.562 "f534946d-cfa6-43ac-9f96-4786971d1c3a" 00:08:25.562 ], 00:08:25.562 "product_name": "Raid Volume", 00:08:25.562 "block_size": 512, 00:08:25.562 "num_blocks": 190464, 00:08:25.562 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a", 00:08:25.562 "assigned_rate_limits": { 00:08:25.562 "rw_ios_per_sec": 0, 00:08:25.562 "rw_mbytes_per_sec": 0, 00:08:25.562 "r_mbytes_per_sec": 0, 00:08:25.562 "w_mbytes_per_sec": 0 00:08:25.562 }, 00:08:25.562 "claimed": false, 00:08:25.562 "zoned": false, 00:08:25.562 "supported_io_types": { 00:08:25.562 "read": true, 00:08:25.562 "write": true, 00:08:25.562 "unmap": true, 00:08:25.562 "flush": true, 00:08:25.562 "reset": true, 00:08:25.562 "nvme_admin": false, 00:08:25.562 "nvme_io": false, 00:08:25.562 "nvme_io_md": false, 00:08:25.562 
"write_zeroes": true, 00:08:25.562 "zcopy": false, 00:08:25.562 "get_zone_info": false, 00:08:25.562 "zone_management": false, 00:08:25.562 "zone_append": false, 00:08:25.562 "compare": false, 00:08:25.562 "compare_and_write": false, 00:08:25.562 "abort": false, 00:08:25.562 "seek_hole": false, 00:08:25.562 "seek_data": false, 00:08:25.562 "copy": false, 00:08:25.562 "nvme_iov_md": false 00:08:25.562 }, 00:08:25.562 "memory_domains": [ 00:08:25.562 { 00:08:25.562 "dma_device_id": "system", 00:08:25.562 "dma_device_type": 1 00:08:25.562 }, 00:08:25.562 { 00:08:25.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.562 "dma_device_type": 2 00:08:25.562 }, 00:08:25.562 { 00:08:25.562 "dma_device_id": "system", 00:08:25.562 "dma_device_type": 1 00:08:25.562 }, 00:08:25.562 { 00:08:25.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.563 "dma_device_type": 2 00:08:25.563 }, 00:08:25.563 { 00:08:25.563 "dma_device_id": "system", 00:08:25.563 "dma_device_type": 1 00:08:25.563 }, 00:08:25.563 { 00:08:25.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.563 "dma_device_type": 2 00:08:25.563 } 00:08:25.563 ], 00:08:25.563 "driver_specific": { 00:08:25.563 "raid": { 00:08:25.563 "uuid": "f534946d-cfa6-43ac-9f96-4786971d1c3a", 00:08:25.563 "strip_size_kb": 64, 00:08:25.563 "state": "online", 00:08:25.563 "raid_level": "raid0", 00:08:25.563 "superblock": true, 00:08:25.563 "num_base_bdevs": 3, 00:08:25.563 "num_base_bdevs_discovered": 3, 00:08:25.563 "num_base_bdevs_operational": 3, 00:08:25.563 "base_bdevs_list": [ 00:08:25.563 { 00:08:25.563 "name": "pt1", 00:08:25.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.563 "is_configured": true, 00:08:25.563 "data_offset": 2048, 00:08:25.563 "data_size": 63488 00:08:25.563 }, 00:08:25.563 { 00:08:25.563 "name": "pt2", 00:08:25.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.563 "is_configured": true, 00:08:25.563 "data_offset": 2048, 00:08:25.563 "data_size": 63488 00:08:25.563 }, 00:08:25.563 
{ 00:08:25.563 "name": "pt3", 00:08:25.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.563 "is_configured": true, 00:08:25.563 "data_offset": 2048, 00:08:25.563 "data_size": 63488 00:08:25.563 } 00:08:25.563 ] 00:08:25.563 } 00:08:25.563 } 00:08:25.563 }' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:25.563 pt2 00:08:25.563 pt3' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:25.563 23:03:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.563 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.832 
[2024-11-18 23:03:44.960767] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f534946d-cfa6-43ac-9f96-4786971d1c3a '!=' f534946d-cfa6-43ac-9f96-4786971d1c3a ']' 00:08:25.832 23:03:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:25.832 23:03:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.832 23:03:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76224 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76224 ']' 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76224 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76224 00:08:25.833 killing process with pid 76224 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76224' 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76224 00:08:25.833 [2024-11-18 23:03:45.045670] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.833 [2024-11-18 23:03:45.045746] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.833 [2024-11-18 23:03:45.045809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.833 [2024-11-18 23:03:45.045818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:25.833 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76224 00:08:25.833 [2024-11-18 23:03:45.078466] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.111 ************************************ 00:08:26.111 END TEST raid_superblock_test 00:08:26.111 ************************************ 00:08:26.111 23:03:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:26.111 00:08:26.111 real 0m3.898s 00:08:26.111 user 0m6.171s 00:08:26.111 sys 0m0.812s 00:08:26.111 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.111 23:03:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.111 23:03:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:26.111 23:03:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:26.111 23:03:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.111 23:03:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.111 ************************************ 00:08:26.111 START TEST raid_read_error_test 00:08:26.111 ************************************ 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:26.111 23:03:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.S4sXXRS87e 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76462 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76462 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76462 ']' 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.111 23:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.111 [2024-11-18 23:03:45.486502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:26.111 [2024-11-18 23:03:45.487024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76462 ] 00:08:26.371 [2024-11-18 23:03:45.646865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.371 [2024-11-18 23:03:45.692714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.371 [2024-11-18 23:03:45.734838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.371 [2024-11-18 23:03:45.734883] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.939 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.939 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:26.939 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.939 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:26.939 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.939 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 BaseBdev1_malloc 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 true 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 [2024-11-18 23:03:46.341273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:27.197 [2024-11-18 23:03:46.341334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.197 [2024-11-18 23:03:46.341371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:27.197 [2024-11-18 23:03:46.341379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.197 [2024-11-18 23:03:46.343451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.197 [2024-11-18 23:03:46.343486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:27.197 BaseBdev1 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 BaseBdev2_malloc 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.197 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 true 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 [2024-11-18 23:03:46.397038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:27.198 [2024-11-18 23:03:46.397088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.198 [2024-11-18 23:03:46.397107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:27.198 [2024-11-18 23:03:46.397116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.198 [2024-11-18 23:03:46.399193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.198 [2024-11-18 23:03:46.399272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:27.198 BaseBdev2 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 BaseBdev3_malloc 00:08:27.198 23:03:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 true 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 [2024-11-18 23:03:46.437399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:27.198 [2024-11-18 23:03:46.437439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.198 [2024-11-18 23:03:46.437472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:27.198 [2024-11-18 23:03:46.437480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.198 [2024-11-18 23:03:46.439453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.198 [2024-11-18 23:03:46.439487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:27.198 BaseBdev3 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 [2024-11-18 23:03:46.449439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.198 [2024-11-18 23:03:46.451175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.198 [2024-11-18 23:03:46.451252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.198 [2024-11-18 23:03:46.451430] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:27.198 [2024-11-18 23:03:46.451451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:27.198 [2024-11-18 23:03:46.451685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:27.198 [2024-11-18 23:03:46.451807] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:27.198 [2024-11-18 23:03:46.451816] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:27.198 [2024-11-18 23:03:46.451936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.198 23:03:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.198 "name": "raid_bdev1", 00:08:27.198 "uuid": "a86062da-8991-45d2-bf5f-d278a22b1f04", 00:08:27.198 "strip_size_kb": 64, 00:08:27.198 "state": "online", 00:08:27.198 "raid_level": "raid0", 00:08:27.198 "superblock": true, 00:08:27.198 "num_base_bdevs": 3, 00:08:27.198 "num_base_bdevs_discovered": 3, 00:08:27.198 "num_base_bdevs_operational": 3, 00:08:27.198 "base_bdevs_list": [ 00:08:27.198 { 00:08:27.198 "name": "BaseBdev1", 00:08:27.198 "uuid": "df3832db-6ecd-52a3-939e-57362ce37dab", 00:08:27.198 "is_configured": true, 00:08:27.198 "data_offset": 2048, 00:08:27.198 "data_size": 63488 00:08:27.198 }, 00:08:27.198 { 00:08:27.198 "name": "BaseBdev2", 00:08:27.198 "uuid": "aa87c70b-e5fa-54e5-9c91-2a83da33296a", 00:08:27.198 "is_configured": true, 00:08:27.198 "data_offset": 2048, 00:08:27.198 "data_size": 63488 
00:08:27.198 }, 00:08:27.198 { 00:08:27.198 "name": "BaseBdev3", 00:08:27.198 "uuid": "8e54608f-aa7c-53a5-9131-129ca3422683", 00:08:27.198 "is_configured": true, 00:08:27.198 "data_offset": 2048, 00:08:27.198 "data_size": 63488 00:08:27.198 } 00:08:27.198 ] 00:08:27.198 }' 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.198 23:03:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.768 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.768 23:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:27.768 [2024-11-18 23:03:46.984852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:28.709 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.710 "name": "raid_bdev1", 00:08:28.710 "uuid": "a86062da-8991-45d2-bf5f-d278a22b1f04", 00:08:28.710 "strip_size_kb": 64, 00:08:28.710 "state": "online", 00:08:28.710 "raid_level": "raid0", 00:08:28.710 "superblock": true, 00:08:28.710 "num_base_bdevs": 3, 00:08:28.710 "num_base_bdevs_discovered": 3, 00:08:28.710 "num_base_bdevs_operational": 3, 00:08:28.710 "base_bdevs_list": [ 00:08:28.710 { 00:08:28.710 "name": "BaseBdev1", 00:08:28.710 "uuid": "df3832db-6ecd-52a3-939e-57362ce37dab", 00:08:28.710 "is_configured": true, 00:08:28.710 "data_offset": 2048, 00:08:28.710 "data_size": 63488 
00:08:28.710 }, 00:08:28.710 { 00:08:28.710 "name": "BaseBdev2", 00:08:28.710 "uuid": "aa87c70b-e5fa-54e5-9c91-2a83da33296a", 00:08:28.710 "is_configured": true, 00:08:28.710 "data_offset": 2048, 00:08:28.710 "data_size": 63488 00:08:28.710 }, 00:08:28.710 { 00:08:28.710 "name": "BaseBdev3", 00:08:28.710 "uuid": "8e54608f-aa7c-53a5-9131-129ca3422683", 00:08:28.710 "is_configured": true, 00:08:28.710 "data_offset": 2048, 00:08:28.710 "data_size": 63488 00:08:28.710 } 00:08:28.710 ] 00:08:28.710 }' 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.710 23:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.969 [2024-11-18 23:03:48.328207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.969 [2024-11-18 23:03:48.328326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.969 [2024-11-18 23:03:48.330805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.969 [2024-11-18 23:03:48.330888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.969 [2024-11-18 23:03:48.330941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.969 [2024-11-18 23:03:48.330982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:28.969 { 00:08:28.969 "results": [ 00:08:28.969 { 00:08:28.969 "job": "raid_bdev1", 00:08:28.969 "core_mask": "0x1", 00:08:28.969 "workload": "randrw", 00:08:28.969 "percentage": 50, 
00:08:28.969 "status": "finished", 00:08:28.969 "queue_depth": 1, 00:08:28.969 "io_size": 131072, 00:08:28.969 "runtime": 1.344332, 00:08:28.969 "iops": 17563.369762826445, 00:08:28.969 "mibps": 2195.4212203533057, 00:08:28.969 "io_failed": 1, 00:08:28.969 "io_timeout": 0, 00:08:28.969 "avg_latency_us": 78.84705859724941, 00:08:28.969 "min_latency_us": 21.016593886462882, 00:08:28.969 "max_latency_us": 1345.0620087336245 00:08:28.969 } 00:08:28.969 ], 00:08:28.969 "core_count": 1 00:08:28.969 } 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76462 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76462 ']' 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76462 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.969 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76462 00:08:29.229 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.229 killing process with pid 76462 00:08:29.229 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.229 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76462' 00:08:29.229 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76462 00:08:29.229 [2024-11-18 23:03:48.364009] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.229 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76462 00:08:29.229 [2024-11-18 
23:03:48.388900] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.488 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.S4sXXRS87e 00:08:29.488 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.489 ************************************ 00:08:29.489 END TEST raid_read_error_test 00:08:29.489 ************************************ 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:29.489 00:08:29.489 real 0m3.245s 00:08:29.489 user 0m4.088s 00:08:29.489 sys 0m0.512s 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.489 23:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.489 23:03:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:29.489 23:03:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:29.489 23:03:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.489 23:03:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.489 ************************************ 00:08:29.489 START TEST raid_write_error_test 00:08:29.489 ************************************ 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:29.489 23:03:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:29.489 23:03:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ESWtyZhZJV 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76591 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76591 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76591 ']' 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.489 23:03:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.489 [2024-11-18 23:03:48.809745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:29.489 [2024-11-18 23:03:48.809913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76591 ] 00:08:29.749 [2024-11-18 23:03:48.969415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.749 [2024-11-18 23:03:49.013985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.749 [2024-11-18 23:03:49.056326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.749 [2024-11-18 23:03:49.056445] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.318 BaseBdev1_malloc 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.318 true 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.318 [2024-11-18 23:03:49.662454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:30.318 [2024-11-18 23:03:49.662501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.318 [2024-11-18 23:03:49.662534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:30.318 [2024-11-18 23:03:49.662543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.318 [2024-11-18 23:03:49.664627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.318 [2024-11-18 23:03:49.664730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:30.318 BaseBdev1 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.318 23:03:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.579 BaseBdev2_malloc 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 true 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 [2024-11-18 23:03:49.712415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:30.579 [2024-11-18 23:03:49.712460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.579 [2024-11-18 23:03:49.712478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:30.579 [2024-11-18 23:03:49.712486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.579 [2024-11-18 23:03:49.714536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.579 [2024-11-18 23:03:49.714619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:30.579 BaseBdev2 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.579 23:03:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 BaseBdev3_malloc 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 true 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 [2024-11-18 23:03:49.752958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:30.579 [2024-11-18 23:03:49.753000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.579 [2024-11-18 23:03:49.753032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:30.579 [2024-11-18 23:03:49.753041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.579 [2024-11-18 23:03:49.755049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.579 [2024-11-18 23:03:49.755084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:30.579 BaseBdev3 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 [2024-11-18 23:03:49.764995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.579 [2024-11-18 23:03:49.766852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.579 [2024-11-18 23:03:49.766978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.579 [2024-11-18 23:03:49.767144] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:30.579 [2024-11-18 23:03:49.767169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.579 [2024-11-18 23:03:49.767449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:30.579 [2024-11-18 23:03:49.767584] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:30.579 [2024-11-18 23:03:49.767595] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:30.579 [2024-11-18 23:03:49.767714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.579 "name": "raid_bdev1", 00:08:30.579 "uuid": "d31c9949-2054-466a-bfeb-0fe8727e6708", 00:08:30.579 "strip_size_kb": 64, 00:08:30.579 "state": "online", 00:08:30.579 "raid_level": "raid0", 00:08:30.579 "superblock": true, 00:08:30.579 "num_base_bdevs": 3, 00:08:30.579 "num_base_bdevs_discovered": 3, 00:08:30.579 "num_base_bdevs_operational": 3, 00:08:30.579 "base_bdevs_list": [ 00:08:30.579 { 00:08:30.579 "name": "BaseBdev1", 
00:08:30.579 "uuid": "5737a673-6bee-5bd6-90af-133dfdc6b865", 00:08:30.579 "is_configured": true, 00:08:30.579 "data_offset": 2048, 00:08:30.579 "data_size": 63488 00:08:30.579 }, 00:08:30.579 { 00:08:30.579 "name": "BaseBdev2", 00:08:30.579 "uuid": "a6b3ed6f-a22b-55ce-88ab-889fec7579d0", 00:08:30.579 "is_configured": true, 00:08:30.579 "data_offset": 2048, 00:08:30.579 "data_size": 63488 00:08:30.579 }, 00:08:30.579 { 00:08:30.579 "name": "BaseBdev3", 00:08:30.579 "uuid": "0d687ce4-0917-578f-9eac-767650994c86", 00:08:30.579 "is_configured": true, 00:08:30.579 "data_offset": 2048, 00:08:30.579 "data_size": 63488 00:08:30.579 } 00:08:30.579 ] 00:08:30.579 }' 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.579 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.147 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:31.147 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.147 [2024-11-18 23:03:50.312434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.086 "name": "raid_bdev1", 00:08:32.086 "uuid": "d31c9949-2054-466a-bfeb-0fe8727e6708", 00:08:32.086 "strip_size_kb": 64, 00:08:32.086 "state": "online", 00:08:32.086 
"raid_level": "raid0", 00:08:32.086 "superblock": true, 00:08:32.086 "num_base_bdevs": 3, 00:08:32.086 "num_base_bdevs_discovered": 3, 00:08:32.086 "num_base_bdevs_operational": 3, 00:08:32.086 "base_bdevs_list": [ 00:08:32.086 { 00:08:32.086 "name": "BaseBdev1", 00:08:32.086 "uuid": "5737a673-6bee-5bd6-90af-133dfdc6b865", 00:08:32.086 "is_configured": true, 00:08:32.086 "data_offset": 2048, 00:08:32.086 "data_size": 63488 00:08:32.086 }, 00:08:32.086 { 00:08:32.086 "name": "BaseBdev2", 00:08:32.086 "uuid": "a6b3ed6f-a22b-55ce-88ab-889fec7579d0", 00:08:32.086 "is_configured": true, 00:08:32.086 "data_offset": 2048, 00:08:32.086 "data_size": 63488 00:08:32.086 }, 00:08:32.086 { 00:08:32.086 "name": "BaseBdev3", 00:08:32.086 "uuid": "0d687ce4-0917-578f-9eac-767650994c86", 00:08:32.086 "is_configured": true, 00:08:32.086 "data_offset": 2048, 00:08:32.086 "data_size": 63488 00:08:32.086 } 00:08:32.086 ] 00:08:32.086 }' 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.086 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.345 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.345 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.345 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.345 [2024-11-18 23:03:51.708115] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.345 [2024-11-18 23:03:51.708149] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.345 [2024-11-18 23:03:51.710648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.345 [2024-11-18 23:03:51.710748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.345 [2024-11-18 23:03:51.710789] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.345 [2024-11-18 23:03:51.710800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:32.345 { 00:08:32.345 "results": [ 00:08:32.345 { 00:08:32.346 "job": "raid_bdev1", 00:08:32.346 "core_mask": "0x1", 00:08:32.346 "workload": "randrw", 00:08:32.346 "percentage": 50, 00:08:32.346 "status": "finished", 00:08:32.346 "queue_depth": 1, 00:08:32.346 "io_size": 131072, 00:08:32.346 "runtime": 1.396594, 00:08:32.346 "iops": 17516.90183403337, 00:08:32.346 "mibps": 2189.6127292541714, 00:08:32.346 "io_failed": 1, 00:08:32.346 "io_timeout": 0, 00:08:32.346 "avg_latency_us": 79.13382379426272, 00:08:32.346 "min_latency_us": 24.482096069868994, 00:08:32.346 "max_latency_us": 1366.5257641921398 00:08:32.346 } 00:08:32.346 ], 00:08:32.346 "core_count": 1 00:08:32.346 } 00:08:32.346 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.346 23:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76591 00:08:32.346 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76591 ']' 00:08:32.346 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76591 00:08:32.346 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:32.605 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.605 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76591 00:08:32.605 killing process with pid 76591 00:08:32.605 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.605 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.605 23:03:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76591' 00:08:32.605 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76591 00:08:32.605 [2024-11-18 23:03:51.758727] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.605 23:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76591 00:08:32.605 [2024-11-18 23:03:51.783076] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ESWtyZhZJV 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:32.865 00:08:32.865 real 0m3.315s 00:08:32.865 user 0m4.200s 00:08:32.865 sys 0m0.515s 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.865 23:03:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 ************************************ 00:08:32.865 END TEST raid_write_error_test 00:08:32.865 ************************************ 00:08:32.865 23:03:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:32.865 23:03:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:32.865 23:03:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:32.865 23:03:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.865 23:03:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 ************************************ 00:08:32.865 START TEST raid_state_function_test 00:08:32.865 ************************************ 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:32.865 23:03:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76729 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76729' 00:08:32.865 Process raid pid: 76729 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76729 00:08:32.865 23:03:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76729 ']' 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.865 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 [2024-11-18 23:03:52.180640] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:32.866 [2024-11-18 23:03:52.180778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.125 [2024-11-18 23:03:52.341939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.125 [2024-11-18 23:03:52.386722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.125 [2024-11-18 23:03:52.428963] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.125 [2024-11-18 23:03:52.429081] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.696 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.696 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:33.696 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.696 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.696 23:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 [2024-11-18 23:03:53.002989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.696 [2024-11-18 23:03:53.003088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.696 [2024-11-18 23:03:53.003132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.696 [2024-11-18 23:03:53.003166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.696 [2024-11-18 23:03:53.003184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.696 [2024-11-18 23:03:53.003223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.696 "name": "Existed_Raid", 00:08:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.696 "strip_size_kb": 64, 00:08:33.696 "state": "configuring", 00:08:33.696 "raid_level": "concat", 00:08:33.696 "superblock": false, 00:08:33.696 "num_base_bdevs": 3, 00:08:33.696 "num_base_bdevs_discovered": 0, 00:08:33.696 "num_base_bdevs_operational": 3, 00:08:33.696 "base_bdevs_list": [ 00:08:33.696 { 00:08:33.696 "name": "BaseBdev1", 00:08:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.696 "is_configured": false, 00:08:33.696 "data_offset": 0, 00:08:33.696 "data_size": 0 00:08:33.696 }, 00:08:33.696 { 00:08:33.696 "name": "BaseBdev2", 00:08:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.696 "is_configured": false, 00:08:33.696 "data_offset": 0, 00:08:33.696 "data_size": 0 00:08:33.696 }, 00:08:33.696 { 00:08:33.696 "name": "BaseBdev3", 00:08:33.696 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:33.696 "is_configured": false, 00:08:33.696 "data_offset": 0, 00:08:33.696 "data_size": 0 00:08:33.696 } 00:08:33.696 ] 00:08:33.696 }' 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.696 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.358 [2024-11-18 23:03:53.422162] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.358 [2024-11-18 23:03:53.422202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.358 [2024-11-18 23:03:53.434170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.358 [2024-11-18 23:03:53.434210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.358 [2024-11-18 23:03:53.434218] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.358 [2024-11-18 23:03:53.434242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:34.358 [2024-11-18 23:03:53.434248] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.358 [2024-11-18 23:03:53.434256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.358 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.359 [2024-11-18 23:03:53.454973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.359 BaseBdev1 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.359 [ 00:08:34.359 { 00:08:34.359 "name": "BaseBdev1", 00:08:34.359 "aliases": [ 00:08:34.359 "3b39ea1a-0fb3-4f13-83dd-52e495a6882d" 00:08:34.359 ], 00:08:34.359 "product_name": "Malloc disk", 00:08:34.359 "block_size": 512, 00:08:34.359 "num_blocks": 65536, 00:08:34.359 "uuid": "3b39ea1a-0fb3-4f13-83dd-52e495a6882d", 00:08:34.359 "assigned_rate_limits": { 00:08:34.359 "rw_ios_per_sec": 0, 00:08:34.359 "rw_mbytes_per_sec": 0, 00:08:34.359 "r_mbytes_per_sec": 0, 00:08:34.359 "w_mbytes_per_sec": 0 00:08:34.359 }, 00:08:34.359 "claimed": true, 00:08:34.359 "claim_type": "exclusive_write", 00:08:34.359 "zoned": false, 00:08:34.359 "supported_io_types": { 00:08:34.359 "read": true, 00:08:34.359 "write": true, 00:08:34.359 "unmap": true, 00:08:34.359 "flush": true, 00:08:34.359 "reset": true, 00:08:34.359 "nvme_admin": false, 00:08:34.359 "nvme_io": false, 00:08:34.359 "nvme_io_md": false, 00:08:34.359 "write_zeroes": true, 00:08:34.359 "zcopy": true, 00:08:34.359 "get_zone_info": false, 00:08:34.359 "zone_management": false, 00:08:34.359 "zone_append": false, 00:08:34.359 "compare": false, 00:08:34.359 "compare_and_write": false, 00:08:34.359 "abort": true, 00:08:34.359 "seek_hole": false, 00:08:34.359 "seek_data": false, 00:08:34.359 "copy": true, 00:08:34.359 "nvme_iov_md": false 00:08:34.359 }, 00:08:34.359 "memory_domains": [ 00:08:34.359 { 00:08:34.359 "dma_device_id": "system", 00:08:34.359 "dma_device_type": 1 00:08:34.359 }, 00:08:34.359 { 00:08:34.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:34.359 "dma_device_type": 2 00:08:34.359 } 00:08:34.359 ], 00:08:34.359 "driver_specific": {} 00:08:34.359 } 00:08:34.359 ] 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.359 23:03:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.359 "name": "Existed_Raid", 00:08:34.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.359 "strip_size_kb": 64, 00:08:34.359 "state": "configuring", 00:08:34.359 "raid_level": "concat", 00:08:34.359 "superblock": false, 00:08:34.359 "num_base_bdevs": 3, 00:08:34.359 "num_base_bdevs_discovered": 1, 00:08:34.359 "num_base_bdevs_operational": 3, 00:08:34.359 "base_bdevs_list": [ 00:08:34.359 { 00:08:34.359 "name": "BaseBdev1", 00:08:34.359 "uuid": "3b39ea1a-0fb3-4f13-83dd-52e495a6882d", 00:08:34.359 "is_configured": true, 00:08:34.359 "data_offset": 0, 00:08:34.359 "data_size": 65536 00:08:34.359 }, 00:08:34.359 { 00:08:34.359 "name": "BaseBdev2", 00:08:34.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.359 "is_configured": false, 00:08:34.359 "data_offset": 0, 00:08:34.359 "data_size": 0 00:08:34.359 }, 00:08:34.359 { 00:08:34.359 "name": "BaseBdev3", 00:08:34.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.359 "is_configured": false, 00:08:34.359 "data_offset": 0, 00:08:34.359 "data_size": 0 00:08:34.359 } 00:08:34.359 ] 00:08:34.359 }' 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.359 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 [2024-11-18 23:03:53.894248] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.619 [2024-11-18 23:03:53.894309] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 [2024-11-18 23:03:53.906267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.619 [2024-11-18 23:03:53.908155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.619 [2024-11-18 23:03:53.908243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.619 [2024-11-18 23:03:53.908257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.619 [2024-11-18 23:03:53.908269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.619 23:03:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.619 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.619 "name": "Existed_Raid", 00:08:34.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.619 "strip_size_kb": 64, 00:08:34.619 "state": "configuring", 00:08:34.619 "raid_level": "concat", 00:08:34.619 "superblock": false, 00:08:34.619 "num_base_bdevs": 3, 00:08:34.619 "num_base_bdevs_discovered": 1, 00:08:34.619 "num_base_bdevs_operational": 3, 00:08:34.619 "base_bdevs_list": [ 00:08:34.619 { 00:08:34.619 "name": "BaseBdev1", 00:08:34.619 "uuid": "3b39ea1a-0fb3-4f13-83dd-52e495a6882d", 00:08:34.619 "is_configured": true, 00:08:34.619 "data_offset": 
0, 00:08:34.620 "data_size": 65536 00:08:34.620 }, 00:08:34.620 { 00:08:34.620 "name": "BaseBdev2", 00:08:34.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.620 "is_configured": false, 00:08:34.620 "data_offset": 0, 00:08:34.620 "data_size": 0 00:08:34.620 }, 00:08:34.620 { 00:08:34.620 "name": "BaseBdev3", 00:08:34.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.620 "is_configured": false, 00:08:34.620 "data_offset": 0, 00:08:34.620 "data_size": 0 00:08:34.620 } 00:08:34.620 ] 00:08:34.620 }' 00:08:34.620 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.620 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.879 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.879 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.879 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [2024-11-18 23:03:54.280137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.140 BaseBdev2 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [ 00:08:35.140 { 00:08:35.140 "name": "BaseBdev2", 00:08:35.140 "aliases": [ 00:08:35.140 "4cd6bdb9-980a-473e-a67a-e4569cb709b5" 00:08:35.140 ], 00:08:35.140 "product_name": "Malloc disk", 00:08:35.140 "block_size": 512, 00:08:35.140 "num_blocks": 65536, 00:08:35.140 "uuid": "4cd6bdb9-980a-473e-a67a-e4569cb709b5", 00:08:35.140 "assigned_rate_limits": { 00:08:35.140 "rw_ios_per_sec": 0, 00:08:35.140 "rw_mbytes_per_sec": 0, 00:08:35.140 "r_mbytes_per_sec": 0, 00:08:35.140 "w_mbytes_per_sec": 0 00:08:35.140 }, 00:08:35.140 "claimed": true, 00:08:35.140 "claim_type": "exclusive_write", 00:08:35.140 "zoned": false, 00:08:35.140 "supported_io_types": { 00:08:35.140 "read": true, 00:08:35.140 "write": true, 00:08:35.140 "unmap": true, 00:08:35.140 "flush": true, 00:08:35.140 "reset": true, 00:08:35.140 "nvme_admin": false, 00:08:35.140 "nvme_io": false, 00:08:35.140 "nvme_io_md": false, 00:08:35.140 "write_zeroes": true, 00:08:35.140 "zcopy": true, 00:08:35.140 "get_zone_info": false, 00:08:35.140 "zone_management": false, 00:08:35.140 "zone_append": false, 00:08:35.140 "compare": false, 00:08:35.140 "compare_and_write": false, 00:08:35.140 "abort": true, 00:08:35.140 "seek_hole": 
false, 00:08:35.140 "seek_data": false, 00:08:35.140 "copy": true, 00:08:35.140 "nvme_iov_md": false 00:08:35.140 }, 00:08:35.140 "memory_domains": [ 00:08:35.140 { 00:08:35.140 "dma_device_id": "system", 00:08:35.140 "dma_device_type": 1 00:08:35.140 }, 00:08:35.140 { 00:08:35.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.140 "dma_device_type": 2 00:08:35.140 } 00:08:35.140 ], 00:08:35.140 "driver_specific": {} 00:08:35.140 } 00:08:35.140 ] 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.140 "name": "Existed_Raid", 00:08:35.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.140 "strip_size_kb": 64, 00:08:35.140 "state": "configuring", 00:08:35.140 "raid_level": "concat", 00:08:35.140 "superblock": false, 00:08:35.140 "num_base_bdevs": 3, 00:08:35.140 "num_base_bdevs_discovered": 2, 00:08:35.140 "num_base_bdevs_operational": 3, 00:08:35.140 "base_bdevs_list": [ 00:08:35.140 { 00:08:35.140 "name": "BaseBdev1", 00:08:35.140 "uuid": "3b39ea1a-0fb3-4f13-83dd-52e495a6882d", 00:08:35.141 "is_configured": true, 00:08:35.141 "data_offset": 0, 00:08:35.141 "data_size": 65536 00:08:35.141 }, 00:08:35.141 { 00:08:35.141 "name": "BaseBdev2", 00:08:35.141 "uuid": "4cd6bdb9-980a-473e-a67a-e4569cb709b5", 00:08:35.141 "is_configured": true, 00:08:35.141 "data_offset": 0, 00:08:35.141 "data_size": 65536 00:08:35.141 }, 00:08:35.141 { 00:08:35.141 "name": "BaseBdev3", 00:08:35.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.141 "is_configured": false, 00:08:35.141 "data_offset": 0, 00:08:35.141 "data_size": 0 00:08:35.141 } 00:08:35.141 ] 00:08:35.141 }' 00:08:35.141 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.141 23:03:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 [2024-11-18 23:03:54.726349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.401 [2024-11-18 23:03:54.726388] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:35.401 [2024-11-18 23:03:54.726398] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.401 [2024-11-18 23:03:54.726687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:35.401 [2024-11-18 23:03:54.726818] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:35.401 [2024-11-18 23:03:54.726835] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:35.401 [2024-11-18 23:03:54.727016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.401 BaseBdev3 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.401 23:03:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 [ 00:08:35.401 { 00:08:35.401 "name": "BaseBdev3", 00:08:35.401 "aliases": [ 00:08:35.401 "563f5706-6137-4035-a3a3-7d41e1e5b9f5" 00:08:35.401 ], 00:08:35.401 "product_name": "Malloc disk", 00:08:35.401 "block_size": 512, 00:08:35.401 "num_blocks": 65536, 00:08:35.401 "uuid": "563f5706-6137-4035-a3a3-7d41e1e5b9f5", 00:08:35.401 "assigned_rate_limits": { 00:08:35.401 "rw_ios_per_sec": 0, 00:08:35.401 "rw_mbytes_per_sec": 0, 00:08:35.401 "r_mbytes_per_sec": 0, 00:08:35.401 "w_mbytes_per_sec": 0 00:08:35.401 }, 00:08:35.401 "claimed": true, 00:08:35.401 "claim_type": "exclusive_write", 00:08:35.401 "zoned": false, 00:08:35.401 "supported_io_types": { 00:08:35.401 "read": true, 00:08:35.401 "write": true, 00:08:35.401 "unmap": true, 00:08:35.401 "flush": true, 00:08:35.401 "reset": true, 00:08:35.401 "nvme_admin": false, 00:08:35.401 "nvme_io": false, 00:08:35.401 "nvme_io_md": false, 00:08:35.401 "write_zeroes": true, 00:08:35.401 "zcopy": true, 00:08:35.401 "get_zone_info": false, 00:08:35.401 "zone_management": false, 00:08:35.401 "zone_append": false, 00:08:35.401 "compare": false, 
00:08:35.401 "compare_and_write": false, 00:08:35.401 "abort": true, 00:08:35.401 "seek_hole": false, 00:08:35.401 "seek_data": false, 00:08:35.401 "copy": true, 00:08:35.401 "nvme_iov_md": false 00:08:35.401 }, 00:08:35.401 "memory_domains": [ 00:08:35.401 { 00:08:35.401 "dma_device_id": "system", 00:08:35.401 "dma_device_type": 1 00:08:35.401 }, 00:08:35.401 { 00:08:35.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.401 "dma_device_type": 2 00:08:35.401 } 00:08:35.401 ], 00:08:35.401 "driver_specific": {} 00:08:35.401 } 00:08:35.401 ] 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.401 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.661 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.661 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.661 "name": "Existed_Raid", 00:08:35.661 "uuid": "2865eb6f-1382-4c0c-931f-d4da20de36b4", 00:08:35.661 "strip_size_kb": 64, 00:08:35.661 "state": "online", 00:08:35.661 "raid_level": "concat", 00:08:35.661 "superblock": false, 00:08:35.661 "num_base_bdevs": 3, 00:08:35.661 "num_base_bdevs_discovered": 3, 00:08:35.661 "num_base_bdevs_operational": 3, 00:08:35.661 "base_bdevs_list": [ 00:08:35.661 { 00:08:35.661 "name": "BaseBdev1", 00:08:35.661 "uuid": "3b39ea1a-0fb3-4f13-83dd-52e495a6882d", 00:08:35.661 "is_configured": true, 00:08:35.661 "data_offset": 0, 00:08:35.661 "data_size": 65536 00:08:35.661 }, 00:08:35.661 { 00:08:35.661 "name": "BaseBdev2", 00:08:35.661 "uuid": "4cd6bdb9-980a-473e-a67a-e4569cb709b5", 00:08:35.661 "is_configured": true, 00:08:35.661 "data_offset": 0, 00:08:35.661 "data_size": 65536 00:08:35.661 }, 00:08:35.661 { 00:08:35.661 "name": "BaseBdev3", 00:08:35.661 "uuid": "563f5706-6137-4035-a3a3-7d41e1e5b9f5", 00:08:35.661 "is_configured": true, 00:08:35.661 "data_offset": 0, 00:08:35.661 "data_size": 65536 00:08:35.661 } 00:08:35.661 ] 00:08:35.661 }' 00:08:35.661 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:35.661 23:03:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.921 [2024-11-18 23:03:55.217829] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.921 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.921 "name": "Existed_Raid", 00:08:35.921 "aliases": [ 00:08:35.921 "2865eb6f-1382-4c0c-931f-d4da20de36b4" 00:08:35.921 ], 00:08:35.921 "product_name": "Raid Volume", 00:08:35.921 "block_size": 512, 00:08:35.921 "num_blocks": 196608, 00:08:35.921 "uuid": "2865eb6f-1382-4c0c-931f-d4da20de36b4", 00:08:35.921 "assigned_rate_limits": { 00:08:35.921 "rw_ios_per_sec": 0, 00:08:35.921 "rw_mbytes_per_sec": 0, 00:08:35.921 "r_mbytes_per_sec": 
0, 00:08:35.921 "w_mbytes_per_sec": 0 00:08:35.921 }, 00:08:35.921 "claimed": false, 00:08:35.921 "zoned": false, 00:08:35.921 "supported_io_types": { 00:08:35.921 "read": true, 00:08:35.921 "write": true, 00:08:35.921 "unmap": true, 00:08:35.921 "flush": true, 00:08:35.921 "reset": true, 00:08:35.921 "nvme_admin": false, 00:08:35.921 "nvme_io": false, 00:08:35.921 "nvme_io_md": false, 00:08:35.921 "write_zeroes": true, 00:08:35.921 "zcopy": false, 00:08:35.921 "get_zone_info": false, 00:08:35.921 "zone_management": false, 00:08:35.921 "zone_append": false, 00:08:35.921 "compare": false, 00:08:35.921 "compare_and_write": false, 00:08:35.921 "abort": false, 00:08:35.921 "seek_hole": false, 00:08:35.921 "seek_data": false, 00:08:35.921 "copy": false, 00:08:35.921 "nvme_iov_md": false 00:08:35.921 }, 00:08:35.921 "memory_domains": [ 00:08:35.921 { 00:08:35.921 "dma_device_id": "system", 00:08:35.921 "dma_device_type": 1 00:08:35.921 }, 00:08:35.921 { 00:08:35.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.921 "dma_device_type": 2 00:08:35.921 }, 00:08:35.921 { 00:08:35.921 "dma_device_id": "system", 00:08:35.921 "dma_device_type": 1 00:08:35.921 }, 00:08:35.921 { 00:08:35.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.921 "dma_device_type": 2 00:08:35.921 }, 00:08:35.921 { 00:08:35.921 "dma_device_id": "system", 00:08:35.921 "dma_device_type": 1 00:08:35.921 }, 00:08:35.921 { 00:08:35.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.921 "dma_device_type": 2 00:08:35.921 } 00:08:35.921 ], 00:08:35.921 "driver_specific": { 00:08:35.921 "raid": { 00:08:35.922 "uuid": "2865eb6f-1382-4c0c-931f-d4da20de36b4", 00:08:35.922 "strip_size_kb": 64, 00:08:35.922 "state": "online", 00:08:35.922 "raid_level": "concat", 00:08:35.922 "superblock": false, 00:08:35.922 "num_base_bdevs": 3, 00:08:35.922 "num_base_bdevs_discovered": 3, 00:08:35.922 "num_base_bdevs_operational": 3, 00:08:35.922 "base_bdevs_list": [ 00:08:35.922 { 00:08:35.922 "name": "BaseBdev1", 
00:08:35.922 "uuid": "3b39ea1a-0fb3-4f13-83dd-52e495a6882d", 00:08:35.922 "is_configured": true, 00:08:35.922 "data_offset": 0, 00:08:35.922 "data_size": 65536 00:08:35.922 }, 00:08:35.922 { 00:08:35.922 "name": "BaseBdev2", 00:08:35.922 "uuid": "4cd6bdb9-980a-473e-a67a-e4569cb709b5", 00:08:35.922 "is_configured": true, 00:08:35.922 "data_offset": 0, 00:08:35.922 "data_size": 65536 00:08:35.922 }, 00:08:35.922 { 00:08:35.922 "name": "BaseBdev3", 00:08:35.922 "uuid": "563f5706-6137-4035-a3a3-7d41e1e5b9f5", 00:08:35.922 "is_configured": true, 00:08:35.922 "data_offset": 0, 00:08:35.922 "data_size": 65536 00:08:35.922 } 00:08:35.922 ] 00:08:35.922 } 00:08:35.922 } 00:08:35.922 }' 00:08:35.922 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:36.182 BaseBdev2 00:08:36.182 BaseBdev3' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:36.182 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.183 [2024-11-18 23:03:55.461169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.183 [2024-11-18 23:03:55.461195] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.183 [2024-11-18 23:03:55.461242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.183 "name": "Existed_Raid", 00:08:36.183 "uuid": "2865eb6f-1382-4c0c-931f-d4da20de36b4", 00:08:36.183 "strip_size_kb": 64, 00:08:36.183 "state": "offline", 00:08:36.183 "raid_level": "concat", 00:08:36.183 "superblock": false, 00:08:36.183 "num_base_bdevs": 3, 00:08:36.183 "num_base_bdevs_discovered": 2, 00:08:36.183 "num_base_bdevs_operational": 2, 00:08:36.183 "base_bdevs_list": [ 00:08:36.183 { 00:08:36.183 "name": null, 00:08:36.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.183 "is_configured": false, 00:08:36.183 "data_offset": 0, 00:08:36.183 "data_size": 65536 00:08:36.183 }, 00:08:36.183 { 00:08:36.183 "name": "BaseBdev2", 00:08:36.183 "uuid": 
"4cd6bdb9-980a-473e-a67a-e4569cb709b5", 00:08:36.183 "is_configured": true, 00:08:36.183 "data_offset": 0, 00:08:36.183 "data_size": 65536 00:08:36.183 }, 00:08:36.183 { 00:08:36.183 "name": "BaseBdev3", 00:08:36.183 "uuid": "563f5706-6137-4035-a3a3-7d41e1e5b9f5", 00:08:36.183 "is_configured": true, 00:08:36.183 "data_offset": 0, 00:08:36.183 "data_size": 65536 00:08:36.183 } 00:08:36.183 ] 00:08:36.183 }' 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.183 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.753 [2024-11-18 23:03:55.947560] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.753 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.753 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.753 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.753 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:36.753 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.753 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 [2024-11-18 23:03:56.010624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.754 [2024-11-18 23:03:56.010668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.754 23:03:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 BaseBdev2 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.754 
23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 [ 00:08:36.754 { 00:08:36.754 "name": "BaseBdev2", 00:08:36.754 "aliases": [ 00:08:36.754 "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03" 00:08:36.754 ], 00:08:36.754 "product_name": "Malloc disk", 00:08:36.754 "block_size": 512, 00:08:36.754 "num_blocks": 65536, 00:08:36.754 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:36.754 "assigned_rate_limits": { 00:08:36.754 "rw_ios_per_sec": 0, 00:08:36.754 "rw_mbytes_per_sec": 0, 00:08:36.754 "r_mbytes_per_sec": 0, 00:08:36.754 "w_mbytes_per_sec": 0 00:08:36.754 }, 00:08:36.754 "claimed": false, 00:08:36.754 "zoned": false, 00:08:36.754 "supported_io_types": { 00:08:36.754 "read": true, 00:08:36.754 "write": true, 00:08:36.754 "unmap": true, 00:08:36.754 "flush": true, 00:08:36.754 "reset": true, 00:08:36.754 "nvme_admin": false, 00:08:36.754 "nvme_io": false, 00:08:36.754 "nvme_io_md": false, 00:08:36.754 "write_zeroes": true, 
00:08:36.754 "zcopy": true, 00:08:36.754 "get_zone_info": false, 00:08:36.754 "zone_management": false, 00:08:36.754 "zone_append": false, 00:08:36.754 "compare": false, 00:08:36.754 "compare_and_write": false, 00:08:36.754 "abort": true, 00:08:36.754 "seek_hole": false, 00:08:36.754 "seek_data": false, 00:08:36.754 "copy": true, 00:08:36.754 "nvme_iov_md": false 00:08:36.754 }, 00:08:36.754 "memory_domains": [ 00:08:36.754 { 00:08:36.754 "dma_device_id": "system", 00:08:36.754 "dma_device_type": 1 00:08:36.754 }, 00:08:36.754 { 00:08:36.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.754 "dma_device_type": 2 00:08:36.754 } 00:08:36.754 ], 00:08:36.754 "driver_specific": {} 00:08:36.754 } 00:08:36.754 ] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.754 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.014 BaseBdev3 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.014 23:03:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.014 [ 00:08:37.014 { 00:08:37.014 "name": "BaseBdev3", 00:08:37.014 "aliases": [ 00:08:37.014 "072cdd84-56a9-475b-bb76-f6f9ca7547f5" 00:08:37.014 ], 00:08:37.014 "product_name": "Malloc disk", 00:08:37.014 "block_size": 512, 00:08:37.014 "num_blocks": 65536, 00:08:37.014 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:37.014 "assigned_rate_limits": { 00:08:37.014 "rw_ios_per_sec": 0, 00:08:37.014 "rw_mbytes_per_sec": 0, 00:08:37.014 "r_mbytes_per_sec": 0, 00:08:37.014 "w_mbytes_per_sec": 0 00:08:37.014 }, 00:08:37.014 "claimed": false, 00:08:37.014 "zoned": false, 00:08:37.014 "supported_io_types": { 00:08:37.014 "read": true, 00:08:37.014 "write": true, 00:08:37.014 "unmap": true, 00:08:37.014 "flush": true, 00:08:37.014 "reset": true, 00:08:37.014 "nvme_admin": false, 00:08:37.014 "nvme_io": false, 00:08:37.014 "nvme_io_md": false, 00:08:37.014 "write_zeroes": true, 
00:08:37.014 "zcopy": true, 00:08:37.014 "get_zone_info": false, 00:08:37.014 "zone_management": false, 00:08:37.014 "zone_append": false, 00:08:37.014 "compare": false, 00:08:37.014 "compare_and_write": false, 00:08:37.014 "abort": true, 00:08:37.014 "seek_hole": false, 00:08:37.014 "seek_data": false, 00:08:37.014 "copy": true, 00:08:37.014 "nvme_iov_md": false 00:08:37.014 }, 00:08:37.014 "memory_domains": [ 00:08:37.014 { 00:08:37.014 "dma_device_id": "system", 00:08:37.014 "dma_device_type": 1 00:08:37.014 }, 00:08:37.014 { 00:08:37.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.014 "dma_device_type": 2 00:08:37.014 } 00:08:37.014 ], 00:08:37.014 "driver_specific": {} 00:08:37.014 } 00:08:37.014 ] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.014 [2024-11-18 23:03:56.181085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.014 [2024-11-18 23:03:56.181183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.014 [2024-11-18 23:03:56.181223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.014 [2024-11-18 23:03:56.182996] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.014 "name": "Existed_Raid", 00:08:37.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.014 "strip_size_kb": 64, 00:08:37.014 "state": "configuring", 00:08:37.014 "raid_level": "concat", 00:08:37.014 "superblock": false, 00:08:37.014 "num_base_bdevs": 3, 00:08:37.014 "num_base_bdevs_discovered": 2, 00:08:37.014 "num_base_bdevs_operational": 3, 00:08:37.014 "base_bdevs_list": [ 00:08:37.014 { 00:08:37.014 "name": "BaseBdev1", 00:08:37.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.014 "is_configured": false, 00:08:37.014 "data_offset": 0, 00:08:37.014 "data_size": 0 00:08:37.014 }, 00:08:37.014 { 00:08:37.014 "name": "BaseBdev2", 00:08:37.014 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:37.014 "is_configured": true, 00:08:37.014 "data_offset": 0, 00:08:37.014 "data_size": 65536 00:08:37.014 }, 00:08:37.014 { 00:08:37.014 "name": "BaseBdev3", 00:08:37.014 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:37.014 "is_configured": true, 00:08:37.014 "data_offset": 0, 00:08:37.014 "data_size": 65536 00:08:37.014 } 00:08:37.014 ] 00:08:37.014 }' 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.014 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.273 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:37.273 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.273 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.273 [2024-11-18 23:03:56.640294] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.273 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.274 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.533 "name": "Existed_Raid", 00:08:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.533 "strip_size_kb": 64, 00:08:37.533 "state": "configuring", 00:08:37.533 "raid_level": "concat", 00:08:37.533 "superblock": false, 
00:08:37.533 "num_base_bdevs": 3, 00:08:37.533 "num_base_bdevs_discovered": 1, 00:08:37.533 "num_base_bdevs_operational": 3, 00:08:37.533 "base_bdevs_list": [ 00:08:37.533 { 00:08:37.533 "name": "BaseBdev1", 00:08:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.533 "is_configured": false, 00:08:37.533 "data_offset": 0, 00:08:37.533 "data_size": 0 00:08:37.533 }, 00:08:37.533 { 00:08:37.533 "name": null, 00:08:37.533 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:37.533 "is_configured": false, 00:08:37.533 "data_offset": 0, 00:08:37.533 "data_size": 65536 00:08:37.533 }, 00:08:37.533 { 00:08:37.533 "name": "BaseBdev3", 00:08:37.533 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:37.533 "is_configured": true, 00:08:37.533 "data_offset": 0, 00:08:37.533 "data_size": 65536 00:08:37.533 } 00:08:37.533 ] 00:08:37.533 }' 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.533 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.793 
23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.793 [2024-11-18 23:03:57.162315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.793 BaseBdev1 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.793 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.053 [ 00:08:38.053 { 00:08:38.053 "name": "BaseBdev1", 00:08:38.053 "aliases": [ 00:08:38.053 "3ae2a967-0577-46d5-9966-2d5f9530e48f" 00:08:38.053 ], 00:08:38.053 "product_name": 
"Malloc disk", 00:08:38.053 "block_size": 512, 00:08:38.053 "num_blocks": 65536, 00:08:38.053 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:38.053 "assigned_rate_limits": { 00:08:38.053 "rw_ios_per_sec": 0, 00:08:38.053 "rw_mbytes_per_sec": 0, 00:08:38.053 "r_mbytes_per_sec": 0, 00:08:38.053 "w_mbytes_per_sec": 0 00:08:38.053 }, 00:08:38.053 "claimed": true, 00:08:38.053 "claim_type": "exclusive_write", 00:08:38.053 "zoned": false, 00:08:38.053 "supported_io_types": { 00:08:38.053 "read": true, 00:08:38.053 "write": true, 00:08:38.053 "unmap": true, 00:08:38.053 "flush": true, 00:08:38.053 "reset": true, 00:08:38.053 "nvme_admin": false, 00:08:38.053 "nvme_io": false, 00:08:38.053 "nvme_io_md": false, 00:08:38.053 "write_zeroes": true, 00:08:38.053 "zcopy": true, 00:08:38.053 "get_zone_info": false, 00:08:38.053 "zone_management": false, 00:08:38.053 "zone_append": false, 00:08:38.053 "compare": false, 00:08:38.053 "compare_and_write": false, 00:08:38.053 "abort": true, 00:08:38.053 "seek_hole": false, 00:08:38.053 "seek_data": false, 00:08:38.053 "copy": true, 00:08:38.053 "nvme_iov_md": false 00:08:38.053 }, 00:08:38.053 "memory_domains": [ 00:08:38.053 { 00:08:38.053 "dma_device_id": "system", 00:08:38.053 "dma_device_type": 1 00:08:38.053 }, 00:08:38.053 { 00:08:38.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.053 "dma_device_type": 2 00:08:38.053 } 00:08:38.053 ], 00:08:38.053 "driver_specific": {} 00:08:38.053 } 00:08:38.053 ] 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.053 23:03:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.053 "name": "Existed_Raid", 00:08:38.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.053 "strip_size_kb": 64, 00:08:38.053 "state": "configuring", 00:08:38.053 "raid_level": "concat", 00:08:38.053 "superblock": false, 00:08:38.053 "num_base_bdevs": 3, 00:08:38.053 "num_base_bdevs_discovered": 2, 00:08:38.053 "num_base_bdevs_operational": 3, 00:08:38.053 "base_bdevs_list": [ 00:08:38.053 { 00:08:38.053 "name": "BaseBdev1", 
00:08:38.053 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:38.053 "is_configured": true, 00:08:38.053 "data_offset": 0, 00:08:38.053 "data_size": 65536 00:08:38.053 }, 00:08:38.053 { 00:08:38.053 "name": null, 00:08:38.053 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:38.053 "is_configured": false, 00:08:38.053 "data_offset": 0, 00:08:38.053 "data_size": 65536 00:08:38.053 }, 00:08:38.053 { 00:08:38.053 "name": "BaseBdev3", 00:08:38.053 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:38.053 "is_configured": true, 00:08:38.053 "data_offset": 0, 00:08:38.053 "data_size": 65536 00:08:38.053 } 00:08:38.053 ] 00:08:38.053 }' 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.053 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.313 [2024-11-18 23:03:57.649509] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.313 
23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.313 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.314 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.314 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.314 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.314 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.574 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.574 "name": "Existed_Raid", 00:08:38.574 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:38.574 "strip_size_kb": 64, 00:08:38.574 "state": "configuring", 00:08:38.574 "raid_level": "concat", 00:08:38.574 "superblock": false, 00:08:38.574 "num_base_bdevs": 3, 00:08:38.574 "num_base_bdevs_discovered": 1, 00:08:38.575 "num_base_bdevs_operational": 3, 00:08:38.575 "base_bdevs_list": [ 00:08:38.575 { 00:08:38.575 "name": "BaseBdev1", 00:08:38.575 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:38.575 "is_configured": true, 00:08:38.575 "data_offset": 0, 00:08:38.575 "data_size": 65536 00:08:38.575 }, 00:08:38.575 { 00:08:38.575 "name": null, 00:08:38.575 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:38.575 "is_configured": false, 00:08:38.575 "data_offset": 0, 00:08:38.575 "data_size": 65536 00:08:38.575 }, 00:08:38.575 { 00:08:38.575 "name": null, 00:08:38.575 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:38.575 "is_configured": false, 00:08:38.575 "data_offset": 0, 00:08:38.575 "data_size": 65536 00:08:38.575 } 00:08:38.575 ] 00:08:38.575 }' 00:08:38.575 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.575 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.833 [2024-11-18 23:03:58.104754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.833 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.833 "name": "Existed_Raid", 00:08:38.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.833 "strip_size_kb": 64, 00:08:38.833 "state": "configuring", 00:08:38.833 "raid_level": "concat", 00:08:38.833 "superblock": false, 00:08:38.833 "num_base_bdevs": 3, 00:08:38.833 "num_base_bdevs_discovered": 2, 00:08:38.833 "num_base_bdevs_operational": 3, 00:08:38.833 "base_bdevs_list": [ 00:08:38.833 { 00:08:38.834 "name": "BaseBdev1", 00:08:38.834 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:38.834 "is_configured": true, 00:08:38.834 "data_offset": 0, 00:08:38.834 "data_size": 65536 00:08:38.834 }, 00:08:38.834 { 00:08:38.834 "name": null, 00:08:38.834 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:38.834 "is_configured": false, 00:08:38.834 "data_offset": 0, 00:08:38.834 "data_size": 65536 00:08:38.834 }, 00:08:38.834 { 00:08:38.834 "name": "BaseBdev3", 00:08:38.834 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:38.834 "is_configured": true, 00:08:38.834 "data_offset": 0, 00:08:38.834 "data_size": 65536 00:08:38.834 } 00:08:38.834 ] 00:08:38.834 }' 00:08:38.834 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.834 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.402 [2024-11-18 23:03:58.615871] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.402 23:03:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.402 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.402 "name": "Existed_Raid", 00:08:39.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.403 "strip_size_kb": 64, 00:08:39.403 "state": "configuring", 00:08:39.403 "raid_level": "concat", 00:08:39.403 "superblock": false, 00:08:39.403 "num_base_bdevs": 3, 00:08:39.403 "num_base_bdevs_discovered": 1, 00:08:39.403 "num_base_bdevs_operational": 3, 00:08:39.403 "base_bdevs_list": [ 00:08:39.403 { 00:08:39.403 "name": null, 00:08:39.403 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:39.403 "is_configured": false, 00:08:39.403 "data_offset": 0, 00:08:39.403 "data_size": 65536 00:08:39.403 }, 00:08:39.403 { 00:08:39.403 "name": null, 00:08:39.403 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:39.403 "is_configured": false, 00:08:39.403 "data_offset": 0, 00:08:39.403 "data_size": 65536 00:08:39.403 }, 00:08:39.403 { 00:08:39.403 "name": "BaseBdev3", 00:08:39.403 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:39.403 "is_configured": true, 00:08:39.403 "data_offset": 0, 00:08:39.403 "data_size": 65536 00:08:39.403 } 00:08:39.403 ] 00:08:39.403 }' 00:08:39.403 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.403 23:03:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.662 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.662 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.662 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.662 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.662 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.923 [2024-11-18 23:03:59.045520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.923 23:03:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.923 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.924 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.924 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.924 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.924 "name": "Existed_Raid", 00:08:39.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.924 "strip_size_kb": 64, 00:08:39.924 "state": "configuring", 00:08:39.924 "raid_level": "concat", 00:08:39.924 "superblock": false, 00:08:39.924 "num_base_bdevs": 3, 00:08:39.924 "num_base_bdevs_discovered": 2, 00:08:39.924 "num_base_bdevs_operational": 3, 00:08:39.924 "base_bdevs_list": [ 00:08:39.924 { 00:08:39.924 "name": null, 00:08:39.924 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:39.924 "is_configured": false, 00:08:39.924 "data_offset": 0, 00:08:39.924 "data_size": 65536 00:08:39.924 }, 00:08:39.924 { 00:08:39.924 "name": "BaseBdev2", 00:08:39.924 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:39.924 "is_configured": true, 00:08:39.924 "data_offset": 
0, 00:08:39.924 "data_size": 65536 00:08:39.924 }, 00:08:39.924 { 00:08:39.924 "name": "BaseBdev3", 00:08:39.924 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:39.924 "is_configured": true, 00:08:39.924 "data_offset": 0, 00:08:39.924 "data_size": 65536 00:08:39.924 } 00:08:39.924 ] 00:08:39.924 }' 00:08:39.924 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.924 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ae2a967-0577-46d5-9966-2d5f9530e48f 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.183 [2024-11-18 23:03:59.555513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:40.183 [2024-11-18 23:03:59.555613] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:40.183 [2024-11-18 23:03:59.555640] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:40.183 [2024-11-18 23:03:59.555930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:40.183 [2024-11-18 23:03:59.556090] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:40.183 [2024-11-18 23:03:59.556132] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:40.183 [2024-11-18 23:03:59.556361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.183 NewBaseBdev 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.183 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.443 
23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.443 [ 00:08:40.443 { 00:08:40.443 "name": "NewBaseBdev", 00:08:40.443 "aliases": [ 00:08:40.443 "3ae2a967-0577-46d5-9966-2d5f9530e48f" 00:08:40.443 ], 00:08:40.443 "product_name": "Malloc disk", 00:08:40.443 "block_size": 512, 00:08:40.443 "num_blocks": 65536, 00:08:40.443 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:40.443 "assigned_rate_limits": { 00:08:40.443 "rw_ios_per_sec": 0, 00:08:40.443 "rw_mbytes_per_sec": 0, 00:08:40.443 "r_mbytes_per_sec": 0, 00:08:40.443 "w_mbytes_per_sec": 0 00:08:40.443 }, 00:08:40.443 "claimed": true, 00:08:40.443 "claim_type": "exclusive_write", 00:08:40.443 "zoned": false, 00:08:40.443 "supported_io_types": { 00:08:40.443 "read": true, 00:08:40.443 "write": true, 00:08:40.443 "unmap": true, 00:08:40.443 "flush": true, 00:08:40.443 "reset": true, 00:08:40.443 "nvme_admin": false, 00:08:40.443 "nvme_io": false, 00:08:40.443 "nvme_io_md": false, 00:08:40.443 "write_zeroes": true, 00:08:40.443 "zcopy": true, 00:08:40.443 "get_zone_info": false, 00:08:40.443 "zone_management": false, 00:08:40.443 "zone_append": false, 00:08:40.443 "compare": false, 00:08:40.443 "compare_and_write": false, 00:08:40.443 "abort": true, 00:08:40.443 "seek_hole": false, 00:08:40.443 "seek_data": false, 00:08:40.443 "copy": true, 00:08:40.443 "nvme_iov_md": false 00:08:40.443 }, 00:08:40.443 
"memory_domains": [ 00:08:40.443 { 00:08:40.443 "dma_device_id": "system", 00:08:40.443 "dma_device_type": 1 00:08:40.443 }, 00:08:40.443 { 00:08:40.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.443 "dma_device_type": 2 00:08:40.443 } 00:08:40.443 ], 00:08:40.443 "driver_specific": {} 00:08:40.443 } 00:08:40.443 ] 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.443 "name": "Existed_Raid", 00:08:40.443 "uuid": "b57a6776-b6c1-499b-9b27-b9fc40a466d5", 00:08:40.443 "strip_size_kb": 64, 00:08:40.443 "state": "online", 00:08:40.443 "raid_level": "concat", 00:08:40.443 "superblock": false, 00:08:40.443 "num_base_bdevs": 3, 00:08:40.443 "num_base_bdevs_discovered": 3, 00:08:40.443 "num_base_bdevs_operational": 3, 00:08:40.443 "base_bdevs_list": [ 00:08:40.443 { 00:08:40.443 "name": "NewBaseBdev", 00:08:40.443 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:40.443 "is_configured": true, 00:08:40.443 "data_offset": 0, 00:08:40.443 "data_size": 65536 00:08:40.443 }, 00:08:40.443 { 00:08:40.443 "name": "BaseBdev2", 00:08:40.443 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:40.443 "is_configured": true, 00:08:40.443 "data_offset": 0, 00:08:40.443 "data_size": 65536 00:08:40.443 }, 00:08:40.443 { 00:08:40.443 "name": "BaseBdev3", 00:08:40.443 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:40.443 "is_configured": true, 00:08:40.443 "data_offset": 0, 00:08:40.443 "data_size": 65536 00:08:40.443 } 00:08:40.443 ] 00:08:40.443 }' 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.443 23:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.703 [2024-11-18 23:04:00.043049] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.703 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.964 "name": "Existed_Raid", 00:08:40.964 "aliases": [ 00:08:40.964 "b57a6776-b6c1-499b-9b27-b9fc40a466d5" 00:08:40.964 ], 00:08:40.964 "product_name": "Raid Volume", 00:08:40.964 "block_size": 512, 00:08:40.964 "num_blocks": 196608, 00:08:40.964 "uuid": "b57a6776-b6c1-499b-9b27-b9fc40a466d5", 00:08:40.964 "assigned_rate_limits": { 00:08:40.964 "rw_ios_per_sec": 0, 00:08:40.964 "rw_mbytes_per_sec": 0, 00:08:40.964 "r_mbytes_per_sec": 0, 00:08:40.964 "w_mbytes_per_sec": 0 00:08:40.964 }, 00:08:40.964 "claimed": false, 00:08:40.964 "zoned": false, 00:08:40.964 "supported_io_types": { 00:08:40.964 "read": true, 00:08:40.964 "write": true, 00:08:40.964 "unmap": true, 00:08:40.964 "flush": true, 00:08:40.964 "reset": true, 00:08:40.964 "nvme_admin": false, 00:08:40.964 "nvme_io": false, 00:08:40.964 "nvme_io_md": false, 00:08:40.964 "write_zeroes": true, 
00:08:40.964 "zcopy": false, 00:08:40.964 "get_zone_info": false, 00:08:40.964 "zone_management": false, 00:08:40.964 "zone_append": false, 00:08:40.964 "compare": false, 00:08:40.964 "compare_and_write": false, 00:08:40.964 "abort": false, 00:08:40.964 "seek_hole": false, 00:08:40.964 "seek_data": false, 00:08:40.964 "copy": false, 00:08:40.964 "nvme_iov_md": false 00:08:40.964 }, 00:08:40.964 "memory_domains": [ 00:08:40.964 { 00:08:40.964 "dma_device_id": "system", 00:08:40.964 "dma_device_type": 1 00:08:40.964 }, 00:08:40.964 { 00:08:40.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.964 "dma_device_type": 2 00:08:40.964 }, 00:08:40.964 { 00:08:40.964 "dma_device_id": "system", 00:08:40.964 "dma_device_type": 1 00:08:40.964 }, 00:08:40.964 { 00:08:40.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.964 "dma_device_type": 2 00:08:40.964 }, 00:08:40.964 { 00:08:40.964 "dma_device_id": "system", 00:08:40.964 "dma_device_type": 1 00:08:40.964 }, 00:08:40.964 { 00:08:40.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.964 "dma_device_type": 2 00:08:40.964 } 00:08:40.964 ], 00:08:40.964 "driver_specific": { 00:08:40.964 "raid": { 00:08:40.964 "uuid": "b57a6776-b6c1-499b-9b27-b9fc40a466d5", 00:08:40.964 "strip_size_kb": 64, 00:08:40.964 "state": "online", 00:08:40.964 "raid_level": "concat", 00:08:40.964 "superblock": false, 00:08:40.964 "num_base_bdevs": 3, 00:08:40.964 "num_base_bdevs_discovered": 3, 00:08:40.964 "num_base_bdevs_operational": 3, 00:08:40.964 "base_bdevs_list": [ 00:08:40.964 { 00:08:40.964 "name": "NewBaseBdev", 00:08:40.964 "uuid": "3ae2a967-0577-46d5-9966-2d5f9530e48f", 00:08:40.964 "is_configured": true, 00:08:40.964 "data_offset": 0, 00:08:40.964 "data_size": 65536 00:08:40.964 }, 00:08:40.964 { 00:08:40.964 "name": "BaseBdev2", 00:08:40.964 "uuid": "3037589c-87a9-4eb0-ae4b-8a0ae7cfca03", 00:08:40.964 "is_configured": true, 00:08:40.964 "data_offset": 0, 00:08:40.964 "data_size": 65536 00:08:40.964 }, 00:08:40.964 { 
00:08:40.964 "name": "BaseBdev3", 00:08:40.964 "uuid": "072cdd84-56a9-475b-bb76-f6f9ca7547f5", 00:08:40.964 "is_configured": true, 00:08:40.964 "data_offset": 0, 00:08:40.964 "data_size": 65536 00:08:40.964 } 00:08:40.964 ] 00:08:40.964 } 00:08:40.964 } 00:08:40.964 }' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.964 BaseBdev2 00:08:40.964 BaseBdev3' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:40.964 [2024-11-18 23:04:00.326316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.964 [2024-11-18 23:04:00.326379] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.964 [2024-11-18 23:04:00.326470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.964 [2024-11-18 23:04:00.326537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.964 [2024-11-18 23:04:00.326596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76729 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76729 ']' 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76729 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:40.964 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.225 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76729 00:08:41.225 killing process with pid 76729 00:08:41.225 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.225 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.225 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76729' 00:08:41.225 23:04:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76729 00:08:41.225 [2024-11-18 23:04:00.375428] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.225 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76729 00:08:41.225 [2024-11-18 23:04:00.405402] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.486 00:08:41.486 real 0m8.555s 00:08:41.486 user 0m14.587s 00:08:41.486 sys 0m1.661s 00:08:41.486 ************************************ 00:08:41.486 END TEST raid_state_function_test 00:08:41.486 ************************************ 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.486 23:04:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:41.486 23:04:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:41.486 23:04:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.486 23:04:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.486 ************************************ 00:08:41.486 START TEST raid_state_function_test_sb 00:08:41.486 ************************************ 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:41.486 Process raid pid: 77328 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77328 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77328' 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77328 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77328 ']' 00:08:41.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.486 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.486 [2024-11-18 23:04:00.809935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:41.486 [2024-11-18 23:04:00.810054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.746 [2024-11-18 23:04:00.970687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.746 [2024-11-18 23:04:01.015360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.746 [2024-11-18 23:04:01.057398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.746 [2024-11-18 23:04:01.057434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.315 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.315 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:42.315 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.316 [2024-11-18 23:04:01.639009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.316 [2024-11-18 23:04:01.639060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.316 [2024-11-18 
23:04:01.639074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.316 [2024-11-18 23:04:01.639084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.316 [2024-11-18 23:04:01.639090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.316 [2024-11-18 23:04:01.639102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.316 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.575 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.575 "name": "Existed_Raid", 00:08:42.575 "uuid": "9ffff8f9-70b7-435b-9556-cef7de684157", 00:08:42.575 "strip_size_kb": 64, 00:08:42.575 "state": "configuring", 00:08:42.575 "raid_level": "concat", 00:08:42.575 "superblock": true, 00:08:42.575 "num_base_bdevs": 3, 00:08:42.575 "num_base_bdevs_discovered": 0, 00:08:42.575 "num_base_bdevs_operational": 3, 00:08:42.575 "base_bdevs_list": [ 00:08:42.575 { 00:08:42.575 "name": "BaseBdev1", 00:08:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.575 "is_configured": false, 00:08:42.575 "data_offset": 0, 00:08:42.575 "data_size": 0 00:08:42.575 }, 00:08:42.575 { 00:08:42.575 "name": "BaseBdev2", 00:08:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.575 "is_configured": false, 00:08:42.575 "data_offset": 0, 00:08:42.575 "data_size": 0 00:08:42.575 }, 00:08:42.575 { 00:08:42.575 "name": "BaseBdev3", 00:08:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.575 "is_configured": false, 00:08:42.575 "data_offset": 0, 00:08:42.575 "data_size": 0 00:08:42.575 } 00:08:42.575 ] 00:08:42.575 }' 00:08:42.575 23:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.575 23:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.852 [2024-11-18 23:04:02.042184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.852 [2024-11-18 23:04:02.042268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.852 [2024-11-18 23:04:02.054205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.852 [2024-11-18 23:04:02.054291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.852 [2024-11-18 23:04:02.054328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.852 [2024-11-18 23:04:02.054350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.852 [2024-11-18 23:04:02.054368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.852 [2024-11-18 23:04:02.054388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.852 
23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.852 [2024-11-18 23:04:02.075052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.852 BaseBdev1 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.852 [ 00:08:42.852 { 
00:08:42.852 "name": "BaseBdev1", 00:08:42.852 "aliases": [ 00:08:42.852 "5c61c87c-3f0c-4de2-a1df-0db5070b2856" 00:08:42.852 ], 00:08:42.852 "product_name": "Malloc disk", 00:08:42.852 "block_size": 512, 00:08:42.852 "num_blocks": 65536, 00:08:42.852 "uuid": "5c61c87c-3f0c-4de2-a1df-0db5070b2856", 00:08:42.852 "assigned_rate_limits": { 00:08:42.852 "rw_ios_per_sec": 0, 00:08:42.852 "rw_mbytes_per_sec": 0, 00:08:42.852 "r_mbytes_per_sec": 0, 00:08:42.852 "w_mbytes_per_sec": 0 00:08:42.852 }, 00:08:42.852 "claimed": true, 00:08:42.852 "claim_type": "exclusive_write", 00:08:42.852 "zoned": false, 00:08:42.852 "supported_io_types": { 00:08:42.852 "read": true, 00:08:42.852 "write": true, 00:08:42.852 "unmap": true, 00:08:42.852 "flush": true, 00:08:42.852 "reset": true, 00:08:42.852 "nvme_admin": false, 00:08:42.852 "nvme_io": false, 00:08:42.852 "nvme_io_md": false, 00:08:42.852 "write_zeroes": true, 00:08:42.852 "zcopy": true, 00:08:42.852 "get_zone_info": false, 00:08:42.852 "zone_management": false, 00:08:42.852 "zone_append": false, 00:08:42.852 "compare": false, 00:08:42.852 "compare_and_write": false, 00:08:42.852 "abort": true, 00:08:42.852 "seek_hole": false, 00:08:42.852 "seek_data": false, 00:08:42.852 "copy": true, 00:08:42.852 "nvme_iov_md": false 00:08:42.852 }, 00:08:42.852 "memory_domains": [ 00:08:42.852 { 00:08:42.852 "dma_device_id": "system", 00:08:42.852 "dma_device_type": 1 00:08:42.852 }, 00:08:42.852 { 00:08:42.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.852 "dma_device_type": 2 00:08:42.852 } 00:08:42.852 ], 00:08:42.852 "driver_specific": {} 00:08:42.852 } 00:08:42.852 ] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
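The `verify_raid_bdev_state` calls traced above pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compare the state, RAID level, strip size, and operational base-bdev count against expected values. A minimal Python sketch of that comparison, operating on sample JSON shaped like the dumps in this log (the field names come from the log output; the helper name `check_raid_state` is hypothetical, not part of SPDK):

```python
import json

def check_raid_state(raid_bdevs_json, name, expected_state,
                     expected_level, expected_strip_kb, expected_operational):
    """Sketch of the verify_raid_bdev_state check (hypothetical helper):
    select the named raid bdev from a bdev_raid_get_bdevs-style array and
    compare the fields the test asserts on."""
    info = next(b for b in json.loads(raid_bdevs_json) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == expected_level
            and info["strip_size_kb"] == expected_strip_kb
            and info["num_base_bdevs_operational"] == expected_operational)

# Sample data mirroring the "configuring" dump in this log.
sample = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": True,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
}])

print(check_raid_state(sample, "Existed_Raid", "configuring", "concat", 64, 3))
```

Once all three base bdevs are claimed, the same check is rerun with `online` as the expected state, which matches the final dump later in this log.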
00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.852 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.853 "name": "Existed_Raid", 00:08:42.853 "uuid": "8711476e-4742-466a-9824-6bc42eddf4bc", 00:08:42.853 "strip_size_kb": 64, 00:08:42.853 "state": "configuring", 00:08:42.853 "raid_level": "concat", 00:08:42.853 "superblock": true, 00:08:42.853 
"num_base_bdevs": 3, 00:08:42.853 "num_base_bdevs_discovered": 1, 00:08:42.853 "num_base_bdevs_operational": 3, 00:08:42.853 "base_bdevs_list": [ 00:08:42.853 { 00:08:42.853 "name": "BaseBdev1", 00:08:42.853 "uuid": "5c61c87c-3f0c-4de2-a1df-0db5070b2856", 00:08:42.853 "is_configured": true, 00:08:42.853 "data_offset": 2048, 00:08:42.853 "data_size": 63488 00:08:42.853 }, 00:08:42.853 { 00:08:42.853 "name": "BaseBdev2", 00:08:42.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.853 "is_configured": false, 00:08:42.853 "data_offset": 0, 00:08:42.853 "data_size": 0 00:08:42.853 }, 00:08:42.853 { 00:08:42.853 "name": "BaseBdev3", 00:08:42.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.853 "is_configured": false, 00:08:42.853 "data_offset": 0, 00:08:42.853 "data_size": 0 00:08:42.853 } 00:08:42.853 ] 00:08:42.853 }' 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.853 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.129 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.129 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.129 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.396 [2024-11-18 23:04:02.506354] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.396 [2024-11-18 23:04:02.506404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.396 
23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.396 [2024-11-18 23:04:02.514382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.396 [2024-11-18 23:04:02.516254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.396 [2024-11-18 23:04:02.516308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.396 [2024-11-18 23:04:02.516319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:43.396 [2024-11-18 23:04:02.516329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.396 "name": "Existed_Raid", 00:08:43.396 "uuid": "218d3b25-9be2-4c76-842b-8a396aa92f77", 00:08:43.396 "strip_size_kb": 64, 00:08:43.396 "state": "configuring", 00:08:43.396 "raid_level": "concat", 00:08:43.396 "superblock": true, 00:08:43.396 "num_base_bdevs": 3, 00:08:43.396 "num_base_bdevs_discovered": 1, 00:08:43.396 "num_base_bdevs_operational": 3, 00:08:43.396 "base_bdevs_list": [ 00:08:43.396 { 00:08:43.396 "name": "BaseBdev1", 00:08:43.396 "uuid": "5c61c87c-3f0c-4de2-a1df-0db5070b2856", 00:08:43.396 "is_configured": true, 00:08:43.396 "data_offset": 2048, 00:08:43.396 "data_size": 63488 00:08:43.396 }, 00:08:43.396 { 00:08:43.396 "name": "BaseBdev2", 00:08:43.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.396 "is_configured": false, 00:08:43.396 "data_offset": 0, 00:08:43.396 "data_size": 0 00:08:43.396 }, 00:08:43.396 { 00:08:43.396 "name": "BaseBdev3", 00:08:43.396 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:43.396 "is_configured": false, 00:08:43.396 "data_offset": 0, 00:08:43.396 "data_size": 0 00:08:43.396 } 00:08:43.396 ] 00:08:43.396 }' 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.396 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.656 [2024-11-18 23:04:02.949627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.656 BaseBdev2 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.656 [ 00:08:43.656 { 00:08:43.656 "name": "BaseBdev2", 00:08:43.656 "aliases": [ 00:08:43.656 "dadd19d9-a17a-4baf-b728-f4d45fd8d4bd" 00:08:43.656 ], 00:08:43.656 "product_name": "Malloc disk", 00:08:43.656 "block_size": 512, 00:08:43.656 "num_blocks": 65536, 00:08:43.656 "uuid": "dadd19d9-a17a-4baf-b728-f4d45fd8d4bd", 00:08:43.656 "assigned_rate_limits": { 00:08:43.656 "rw_ios_per_sec": 0, 00:08:43.656 "rw_mbytes_per_sec": 0, 00:08:43.656 "r_mbytes_per_sec": 0, 00:08:43.656 "w_mbytes_per_sec": 0 00:08:43.656 }, 00:08:43.656 "claimed": true, 00:08:43.656 "claim_type": "exclusive_write", 00:08:43.656 "zoned": false, 00:08:43.656 "supported_io_types": { 00:08:43.656 "read": true, 00:08:43.656 "write": true, 00:08:43.656 "unmap": true, 00:08:43.656 "flush": true, 00:08:43.656 "reset": true, 00:08:43.656 "nvme_admin": false, 00:08:43.656 "nvme_io": false, 00:08:43.656 "nvme_io_md": false, 00:08:43.656 "write_zeroes": true, 00:08:43.656 "zcopy": true, 00:08:43.656 "get_zone_info": false, 00:08:43.656 "zone_management": false, 00:08:43.656 "zone_append": false, 00:08:43.656 "compare": false, 00:08:43.656 "compare_and_write": false, 00:08:43.656 "abort": true, 00:08:43.656 "seek_hole": false, 00:08:43.656 "seek_data": false, 00:08:43.656 "copy": true, 00:08:43.656 "nvme_iov_md": false 00:08:43.656 }, 00:08:43.656 "memory_domains": [ 00:08:43.656 { 00:08:43.656 "dma_device_id": "system", 00:08:43.656 "dma_device_type": 1 00:08:43.656 }, 00:08:43.656 { 00:08:43.656 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.656 "dma_device_type": 2 00:08:43.656 } 00:08:43.656 ], 00:08:43.656 "driver_specific": {} 00:08:43.656 } 00:08:43.656 ] 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.656 23:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.656 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.917 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.917 "name": "Existed_Raid", 00:08:43.917 "uuid": "218d3b25-9be2-4c76-842b-8a396aa92f77", 00:08:43.917 "strip_size_kb": 64, 00:08:43.917 "state": "configuring", 00:08:43.917 "raid_level": "concat", 00:08:43.917 "superblock": true, 00:08:43.917 "num_base_bdevs": 3, 00:08:43.917 "num_base_bdevs_discovered": 2, 00:08:43.917 "num_base_bdevs_operational": 3, 00:08:43.917 "base_bdevs_list": [ 00:08:43.917 { 00:08:43.917 "name": "BaseBdev1", 00:08:43.917 "uuid": "5c61c87c-3f0c-4de2-a1df-0db5070b2856", 00:08:43.917 "is_configured": true, 00:08:43.917 "data_offset": 2048, 00:08:43.917 "data_size": 63488 00:08:43.917 }, 00:08:43.917 { 00:08:43.917 "name": "BaseBdev2", 00:08:43.917 "uuid": "dadd19d9-a17a-4baf-b728-f4d45fd8d4bd", 00:08:43.917 "is_configured": true, 00:08:43.917 "data_offset": 2048, 00:08:43.917 "data_size": 63488 00:08:43.917 }, 00:08:43.917 { 00:08:43.917 "name": "BaseBdev3", 00:08:43.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.917 "is_configured": false, 00:08:43.917 "data_offset": 0, 00:08:43.917 "data_size": 0 00:08:43.917 } 00:08:43.917 ] 00:08:43.917 }' 00:08:43.917 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.917 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.176 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:44.176 23:04:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.177 BaseBdev3 00:08:44.177 [2024-11-18 23:04:03.423785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.177 [2024-11-18 23:04:03.423978] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:44.177 [2024-11-18 23:04:03.424002] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.177 [2024-11-18 23:04:03.424316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:44.177 [2024-11-18 23:04:03.424437] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:44.177 [2024-11-18 23:04:03.424452] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:44.177 [2024-11-18 23:04:03.424573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.177 [ 00:08:44.177 { 00:08:44.177 "name": "BaseBdev3", 00:08:44.177 "aliases": [ 00:08:44.177 "ae12e4ca-2ed0-42b9-8df0-ec6969153a40" 00:08:44.177 ], 00:08:44.177 "product_name": "Malloc disk", 00:08:44.177 "block_size": 512, 00:08:44.177 "num_blocks": 65536, 00:08:44.177 "uuid": "ae12e4ca-2ed0-42b9-8df0-ec6969153a40", 00:08:44.177 "assigned_rate_limits": { 00:08:44.177 "rw_ios_per_sec": 0, 00:08:44.177 "rw_mbytes_per_sec": 0, 00:08:44.177 "r_mbytes_per_sec": 0, 00:08:44.177 "w_mbytes_per_sec": 0 00:08:44.177 }, 00:08:44.177 "claimed": true, 00:08:44.177 "claim_type": "exclusive_write", 00:08:44.177 "zoned": false, 00:08:44.177 "supported_io_types": { 00:08:44.177 "read": true, 00:08:44.177 "write": true, 00:08:44.177 "unmap": true, 00:08:44.177 "flush": true, 00:08:44.177 "reset": true, 00:08:44.177 "nvme_admin": false, 00:08:44.177 "nvme_io": false, 00:08:44.177 "nvme_io_md": false, 00:08:44.177 "write_zeroes": true, 00:08:44.177 "zcopy": true, 00:08:44.177 "get_zone_info": false, 00:08:44.177 "zone_management": false, 00:08:44.177 "zone_append": false, 00:08:44.177 "compare": false, 00:08:44.177 "compare_and_write": false, 00:08:44.177 "abort": true, 00:08:44.177 "seek_hole": false, 00:08:44.177 "seek_data": false, 
00:08:44.177 "copy": true, 00:08:44.177 "nvme_iov_md": false 00:08:44.177 }, 00:08:44.177 "memory_domains": [ 00:08:44.177 { 00:08:44.177 "dma_device_id": "system", 00:08:44.177 "dma_device_type": 1 00:08:44.177 }, 00:08:44.177 { 00:08:44.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.177 "dma_device_type": 2 00:08:44.177 } 00:08:44.177 ], 00:08:44.177 "driver_specific": {} 00:08:44.177 } 00:08:44.177 ] 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.177 "name": "Existed_Raid", 00:08:44.177 "uuid": "218d3b25-9be2-4c76-842b-8a396aa92f77", 00:08:44.177 "strip_size_kb": 64, 00:08:44.177 "state": "online", 00:08:44.177 "raid_level": "concat", 00:08:44.177 "superblock": true, 00:08:44.177 "num_base_bdevs": 3, 00:08:44.177 "num_base_bdevs_discovered": 3, 00:08:44.177 "num_base_bdevs_operational": 3, 00:08:44.177 "base_bdevs_list": [ 00:08:44.177 { 00:08:44.177 "name": "BaseBdev1", 00:08:44.177 "uuid": "5c61c87c-3f0c-4de2-a1df-0db5070b2856", 00:08:44.177 "is_configured": true, 00:08:44.177 "data_offset": 2048, 00:08:44.177 "data_size": 63488 00:08:44.177 }, 00:08:44.177 { 00:08:44.177 "name": "BaseBdev2", 00:08:44.177 "uuid": "dadd19d9-a17a-4baf-b728-f4d45fd8d4bd", 00:08:44.177 "is_configured": true, 00:08:44.177 "data_offset": 2048, 00:08:44.177 "data_size": 63488 00:08:44.177 }, 00:08:44.177 { 00:08:44.177 "name": "BaseBdev3", 00:08:44.177 "uuid": "ae12e4ca-2ed0-42b9-8df0-ec6969153a40", 00:08:44.177 "is_configured": true, 00:08:44.177 "data_offset": 2048, 00:08:44.177 "data_size": 63488 00:08:44.177 } 00:08:44.177 ] 00:08:44.177 }' 00:08:44.177 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.177 23:04:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.746 [2024-11-18 23:04:03.895347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.746 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.746 "name": "Existed_Raid", 00:08:44.746 "aliases": [ 00:08:44.746 "218d3b25-9be2-4c76-842b-8a396aa92f77" 00:08:44.746 ], 00:08:44.746 "product_name": "Raid Volume", 00:08:44.746 "block_size": 512, 00:08:44.746 "num_blocks": 190464, 00:08:44.746 "uuid": "218d3b25-9be2-4c76-842b-8a396aa92f77", 00:08:44.746 "assigned_rate_limits": { 00:08:44.746 "rw_ios_per_sec": 0, 00:08:44.746 "rw_mbytes_per_sec": 0, 00:08:44.746 
"r_mbytes_per_sec": 0, 00:08:44.746 "w_mbytes_per_sec": 0 00:08:44.746 }, 00:08:44.746 "claimed": false, 00:08:44.746 "zoned": false, 00:08:44.746 "supported_io_types": { 00:08:44.746 "read": true, 00:08:44.746 "write": true, 00:08:44.746 "unmap": true, 00:08:44.746 "flush": true, 00:08:44.746 "reset": true, 00:08:44.746 "nvme_admin": false, 00:08:44.746 "nvme_io": false, 00:08:44.746 "nvme_io_md": false, 00:08:44.746 "write_zeroes": true, 00:08:44.746 "zcopy": false, 00:08:44.746 "get_zone_info": false, 00:08:44.746 "zone_management": false, 00:08:44.747 "zone_append": false, 00:08:44.747 "compare": false, 00:08:44.747 "compare_and_write": false, 00:08:44.747 "abort": false, 00:08:44.747 "seek_hole": false, 00:08:44.747 "seek_data": false, 00:08:44.747 "copy": false, 00:08:44.747 "nvme_iov_md": false 00:08:44.747 }, 00:08:44.747 "memory_domains": [ 00:08:44.747 { 00:08:44.747 "dma_device_id": "system", 00:08:44.747 "dma_device_type": 1 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.747 "dma_device_type": 2 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "dma_device_id": "system", 00:08:44.747 "dma_device_type": 1 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.747 "dma_device_type": 2 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "dma_device_id": "system", 00:08:44.747 "dma_device_type": 1 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.747 "dma_device_type": 2 00:08:44.747 } 00:08:44.747 ], 00:08:44.747 "driver_specific": { 00:08:44.747 "raid": { 00:08:44.747 "uuid": "218d3b25-9be2-4c76-842b-8a396aa92f77", 00:08:44.747 "strip_size_kb": 64, 00:08:44.747 "state": "online", 00:08:44.747 "raid_level": "concat", 00:08:44.747 "superblock": true, 00:08:44.747 "num_base_bdevs": 3, 00:08:44.747 "num_base_bdevs_discovered": 3, 00:08:44.747 "num_base_bdevs_operational": 3, 00:08:44.747 "base_bdevs_list": [ 00:08:44.747 { 00:08:44.747 
"name": "BaseBdev1", 00:08:44.747 "uuid": "5c61c87c-3f0c-4de2-a1df-0db5070b2856", 00:08:44.747 "is_configured": true, 00:08:44.747 "data_offset": 2048, 00:08:44.747 "data_size": 63488 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "name": "BaseBdev2", 00:08:44.747 "uuid": "dadd19d9-a17a-4baf-b728-f4d45fd8d4bd", 00:08:44.747 "is_configured": true, 00:08:44.747 "data_offset": 2048, 00:08:44.747 "data_size": 63488 00:08:44.747 }, 00:08:44.747 { 00:08:44.747 "name": "BaseBdev3", 00:08:44.747 "uuid": "ae12e4ca-2ed0-42b9-8df0-ec6969153a40", 00:08:44.747 "is_configured": true, 00:08:44.747 "data_offset": 2048, 00:08:44.747 "data_size": 63488 00:08:44.747 } 00:08:44.747 ] 00:08:44.747 } 00:08:44.747 } 00:08:44.747 }' 00:08:44.747 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.747 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.747 BaseBdev2 00:08:44.747 BaseBdev3' 00:08:44.747 23:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.747 23:04:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.747 23:04:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.007 [2024-11-18 23:04:04.154648] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.007 [2024-11-18 23:04:04.154714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.007 [2024-11-18 23:04:04.154767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.007 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.007 "name": "Existed_Raid", 00:08:45.007 "uuid": "218d3b25-9be2-4c76-842b-8a396aa92f77", 00:08:45.007 "strip_size_kb": 64, 00:08:45.007 "state": "offline", 00:08:45.007 "raid_level": "concat", 00:08:45.007 "superblock": true, 00:08:45.008 "num_base_bdevs": 3, 00:08:45.008 "num_base_bdevs_discovered": 2, 00:08:45.008 "num_base_bdevs_operational": 2, 00:08:45.008 "base_bdevs_list": [ 00:08:45.008 { 00:08:45.008 "name": null, 00:08:45.008 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:45.008 "is_configured": false, 00:08:45.008 "data_offset": 0, 00:08:45.008 "data_size": 63488 00:08:45.008 }, 00:08:45.008 { 00:08:45.008 "name": "BaseBdev2", 00:08:45.008 "uuid": "dadd19d9-a17a-4baf-b728-f4d45fd8d4bd", 00:08:45.008 "is_configured": true, 00:08:45.008 "data_offset": 2048, 00:08:45.008 "data_size": 63488 00:08:45.008 }, 00:08:45.008 { 00:08:45.008 "name": "BaseBdev3", 00:08:45.008 "uuid": "ae12e4ca-2ed0-42b9-8df0-ec6969153a40", 00:08:45.008 "is_configured": true, 00:08:45.008 "data_offset": 2048, 00:08:45.008 "data_size": 63488 00:08:45.008 } 00:08:45.008 ] 00:08:45.008 }' 00:08:45.008 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.008 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.268 [2024-11-18 23:04:04.625104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.268 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 [2024-11-18 23:04:04.691986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:45.529 [2024-11-18 23:04:04.692031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.529 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 BaseBdev2 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 
23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 [ 00:08:45.530 { 00:08:45.530 "name": "BaseBdev2", 00:08:45.530 "aliases": [ 00:08:45.530 "e104e927-f943-41d6-bccb-d3bdbe3218fb" 00:08:45.530 ], 00:08:45.530 "product_name": "Malloc disk", 00:08:45.530 "block_size": 512, 00:08:45.530 "num_blocks": 65536, 00:08:45.530 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:45.530 "assigned_rate_limits": { 00:08:45.530 "rw_ios_per_sec": 0, 00:08:45.530 "rw_mbytes_per_sec": 0, 00:08:45.530 "r_mbytes_per_sec": 0, 00:08:45.530 "w_mbytes_per_sec": 0 
00:08:45.530 }, 00:08:45.530 "claimed": false, 00:08:45.530 "zoned": false, 00:08:45.530 "supported_io_types": { 00:08:45.530 "read": true, 00:08:45.530 "write": true, 00:08:45.530 "unmap": true, 00:08:45.530 "flush": true, 00:08:45.530 "reset": true, 00:08:45.530 "nvme_admin": false, 00:08:45.530 "nvme_io": false, 00:08:45.530 "nvme_io_md": false, 00:08:45.530 "write_zeroes": true, 00:08:45.530 "zcopy": true, 00:08:45.530 "get_zone_info": false, 00:08:45.530 "zone_management": false, 00:08:45.530 "zone_append": false, 00:08:45.530 "compare": false, 00:08:45.530 "compare_and_write": false, 00:08:45.530 "abort": true, 00:08:45.530 "seek_hole": false, 00:08:45.530 "seek_data": false, 00:08:45.530 "copy": true, 00:08:45.530 "nvme_iov_md": false 00:08:45.530 }, 00:08:45.530 "memory_domains": [ 00:08:45.530 { 00:08:45.530 "dma_device_id": "system", 00:08:45.530 "dma_device_type": 1 00:08:45.530 }, 00:08:45.530 { 00:08:45.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.530 "dma_device_type": 2 00:08:45.530 } 00:08:45.530 ], 00:08:45.530 "driver_specific": {} 00:08:45.530 } 00:08:45.530 ] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 BaseBdev3 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 [ 00:08:45.530 { 00:08:45.530 "name": "BaseBdev3", 00:08:45.530 "aliases": [ 00:08:45.530 "18b032d6-5ee6-43a5-9674-84cb8f44ddcd" 00:08:45.530 ], 00:08:45.530 "product_name": "Malloc disk", 00:08:45.530 "block_size": 512, 00:08:45.530 "num_blocks": 65536, 00:08:45.530 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:45.530 "assigned_rate_limits": { 00:08:45.530 "rw_ios_per_sec": 0, 00:08:45.530 "rw_mbytes_per_sec": 0, 
00:08:45.530 "r_mbytes_per_sec": 0, 00:08:45.530 "w_mbytes_per_sec": 0 00:08:45.530 }, 00:08:45.530 "claimed": false, 00:08:45.530 "zoned": false, 00:08:45.530 "supported_io_types": { 00:08:45.530 "read": true, 00:08:45.530 "write": true, 00:08:45.530 "unmap": true, 00:08:45.530 "flush": true, 00:08:45.530 "reset": true, 00:08:45.530 "nvme_admin": false, 00:08:45.530 "nvme_io": false, 00:08:45.530 "nvme_io_md": false, 00:08:45.530 "write_zeroes": true, 00:08:45.530 "zcopy": true, 00:08:45.530 "get_zone_info": false, 00:08:45.530 "zone_management": false, 00:08:45.530 "zone_append": false, 00:08:45.530 "compare": false, 00:08:45.530 "compare_and_write": false, 00:08:45.530 "abort": true, 00:08:45.530 "seek_hole": false, 00:08:45.530 "seek_data": false, 00:08:45.530 "copy": true, 00:08:45.530 "nvme_iov_md": false 00:08:45.530 }, 00:08:45.530 "memory_domains": [ 00:08:45.530 { 00:08:45.530 "dma_device_id": "system", 00:08:45.530 "dma_device_type": 1 00:08:45.530 }, 00:08:45.530 { 00:08:45.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.530 "dma_device_type": 2 00:08:45.530 } 00:08:45.530 ], 00:08:45.530 "driver_specific": {} 00:08:45.530 } 00:08:45.530 ] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.530 [2024-11-18 23:04:04.866660] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.530 [2024-11-18 23:04:04.866754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.530 [2024-11-18 23:04:04.866795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.530 [2024-11-18 23:04:04.868578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.530 23:04:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.791 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.791 "name": "Existed_Raid", 00:08:45.791 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:45.791 "strip_size_kb": 64, 00:08:45.791 "state": "configuring", 00:08:45.791 "raid_level": "concat", 00:08:45.791 "superblock": true, 00:08:45.791 "num_base_bdevs": 3, 00:08:45.791 "num_base_bdevs_discovered": 2, 00:08:45.791 "num_base_bdevs_operational": 3, 00:08:45.791 "base_bdevs_list": [ 00:08:45.791 { 00:08:45.791 "name": "BaseBdev1", 00:08:45.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.791 "is_configured": false, 00:08:45.791 "data_offset": 0, 00:08:45.791 "data_size": 0 00:08:45.791 }, 00:08:45.791 { 00:08:45.791 "name": "BaseBdev2", 00:08:45.791 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:45.791 "is_configured": true, 00:08:45.791 "data_offset": 2048, 00:08:45.791 "data_size": 63488 00:08:45.791 }, 00:08:45.791 { 00:08:45.791 "name": "BaseBdev3", 00:08:45.791 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:45.791 "is_configured": true, 00:08:45.791 "data_offset": 2048, 00:08:45.791 "data_size": 63488 00:08:45.791 } 00:08:45.791 ] 00:08:45.791 }' 00:08:45.791 23:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.791 23:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.052 [2024-11-18 23:04:05.305865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.052 "name": "Existed_Raid", 00:08:46.052 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:46.052 "strip_size_kb": 64, 00:08:46.052 "state": "configuring", 00:08:46.052 "raid_level": "concat", 00:08:46.052 "superblock": true, 00:08:46.052 "num_base_bdevs": 3, 00:08:46.052 "num_base_bdevs_discovered": 1, 00:08:46.052 "num_base_bdevs_operational": 3, 00:08:46.052 "base_bdevs_list": [ 00:08:46.052 { 00:08:46.052 "name": "BaseBdev1", 00:08:46.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.052 "is_configured": false, 00:08:46.052 "data_offset": 0, 00:08:46.052 "data_size": 0 00:08:46.052 }, 00:08:46.052 { 00:08:46.052 "name": null, 00:08:46.052 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:46.052 "is_configured": false, 00:08:46.052 "data_offset": 0, 00:08:46.052 "data_size": 63488 00:08:46.052 }, 00:08:46.052 { 00:08:46.052 "name": "BaseBdev3", 00:08:46.052 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:46.052 "is_configured": true, 00:08:46.052 "data_offset": 2048, 00:08:46.052 "data_size": 63488 00:08:46.052 } 00:08:46.052 ] 00:08:46.052 }' 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.052 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.621 [2024-11-18 23:04:05.736082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.621 BaseBdev1 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.621 23:04:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.621 [ 00:08:46.621 { 00:08:46.621 "name": "BaseBdev1", 00:08:46.621 "aliases": [ 00:08:46.621 "66d93dd5-952a-404c-bc44-69dfb9796350" 00:08:46.621 ], 00:08:46.621 "product_name": "Malloc disk", 00:08:46.621 "block_size": 512, 00:08:46.621 "num_blocks": 65536, 00:08:46.621 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:46.621 "assigned_rate_limits": { 00:08:46.621 "rw_ios_per_sec": 0, 00:08:46.621 "rw_mbytes_per_sec": 0, 00:08:46.621 "r_mbytes_per_sec": 0, 00:08:46.621 "w_mbytes_per_sec": 0 00:08:46.621 }, 00:08:46.621 "claimed": true, 00:08:46.621 "claim_type": "exclusive_write", 00:08:46.621 "zoned": false, 00:08:46.621 "supported_io_types": { 00:08:46.621 "read": true, 00:08:46.621 "write": true, 00:08:46.621 "unmap": true, 00:08:46.621 "flush": true, 00:08:46.621 "reset": true, 00:08:46.621 "nvme_admin": false, 00:08:46.621 "nvme_io": false, 00:08:46.621 "nvme_io_md": false, 00:08:46.621 "write_zeroes": true, 00:08:46.621 "zcopy": true, 00:08:46.621 "get_zone_info": false, 00:08:46.621 "zone_management": false, 00:08:46.621 "zone_append": false, 00:08:46.621 "compare": false, 00:08:46.621 "compare_and_write": false, 00:08:46.621 "abort": true, 00:08:46.621 "seek_hole": false, 00:08:46.621 "seek_data": false, 00:08:46.621 "copy": true, 00:08:46.621 "nvme_iov_md": false 00:08:46.621 }, 00:08:46.621 "memory_domains": [ 00:08:46.621 { 00:08:46.621 "dma_device_id": "system", 00:08:46.621 "dma_device_type": 1 00:08:46.621 }, 00:08:46.621 { 00:08:46.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.621 
"dma_device_type": 2 00:08:46.621 } 00:08:46.621 ], 00:08:46.621 "driver_specific": {} 00:08:46.621 } 00:08:46.621 ] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.621 "name": "Existed_Raid", 00:08:46.621 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:46.621 "strip_size_kb": 64, 00:08:46.621 "state": "configuring", 00:08:46.621 "raid_level": "concat", 00:08:46.621 "superblock": true, 00:08:46.621 "num_base_bdevs": 3, 00:08:46.621 "num_base_bdevs_discovered": 2, 00:08:46.621 "num_base_bdevs_operational": 3, 00:08:46.621 "base_bdevs_list": [ 00:08:46.621 { 00:08:46.621 "name": "BaseBdev1", 00:08:46.621 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:46.621 "is_configured": true, 00:08:46.621 "data_offset": 2048, 00:08:46.621 "data_size": 63488 00:08:46.621 }, 00:08:46.621 { 00:08:46.621 "name": null, 00:08:46.621 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:46.621 "is_configured": false, 00:08:46.621 "data_offset": 0, 00:08:46.621 "data_size": 63488 00:08:46.621 }, 00:08:46.621 { 00:08:46.621 "name": "BaseBdev3", 00:08:46.621 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:46.621 "is_configured": true, 00:08:46.621 "data_offset": 2048, 00:08:46.621 "data_size": 63488 00:08:46.621 } 00:08:46.621 ] 00:08:46.621 }' 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.621 23:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.881 [2024-11-18 23:04:06.207324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.881 
23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.881 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.141 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.141 "name": "Existed_Raid", 00:08:47.141 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:47.141 "strip_size_kb": 64, 00:08:47.141 "state": "configuring", 00:08:47.141 "raid_level": "concat", 00:08:47.141 "superblock": true, 00:08:47.141 "num_base_bdevs": 3, 00:08:47.141 "num_base_bdevs_discovered": 1, 00:08:47.141 "num_base_bdevs_operational": 3, 00:08:47.141 "base_bdevs_list": [ 00:08:47.141 { 00:08:47.141 "name": "BaseBdev1", 00:08:47.141 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:47.141 "is_configured": true, 00:08:47.141 "data_offset": 2048, 00:08:47.141 "data_size": 63488 00:08:47.141 }, 00:08:47.141 { 00:08:47.141 "name": null, 00:08:47.141 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:47.141 "is_configured": false, 00:08:47.141 "data_offset": 0, 00:08:47.141 "data_size": 63488 00:08:47.141 }, 00:08:47.141 { 00:08:47.141 "name": null, 00:08:47.141 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:47.141 "is_configured": false, 00:08:47.141 "data_offset": 0, 00:08:47.141 "data_size": 63488 00:08:47.141 } 00:08:47.141 ] 00:08:47.141 }' 00:08:47.141 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.141 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 
23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 [2024-11-18 23:04:06.630707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.401 "name": "Existed_Raid", 00:08:47.401 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:47.401 "strip_size_kb": 64, 00:08:47.401 "state": "configuring", 00:08:47.401 "raid_level": "concat", 00:08:47.401 "superblock": true, 00:08:47.401 "num_base_bdevs": 3, 00:08:47.401 "num_base_bdevs_discovered": 2, 00:08:47.401 "num_base_bdevs_operational": 3, 00:08:47.401 "base_bdevs_list": [ 00:08:47.401 { 00:08:47.401 "name": "BaseBdev1", 00:08:47.401 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:47.401 "is_configured": true, 00:08:47.401 "data_offset": 2048, 00:08:47.401 "data_size": 63488 00:08:47.401 }, 00:08:47.401 { 00:08:47.401 "name": null, 00:08:47.401 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:47.401 "is_configured": false, 00:08:47.401 "data_offset": 0, 00:08:47.401 "data_size": 
63488 00:08:47.401 }, 00:08:47.401 { 00:08:47.401 "name": "BaseBdev3", 00:08:47.401 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:47.401 "is_configured": true, 00:08:47.401 "data_offset": 2048, 00:08:47.401 "data_size": 63488 00:08:47.401 } 00:08:47.401 ] 00:08:47.401 }' 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.401 23:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.972 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.972 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.972 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.973 [2024-11-18 23:04:07.089955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.973 "name": "Existed_Raid", 00:08:47.973 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:47.973 "strip_size_kb": 64, 00:08:47.973 "state": "configuring", 00:08:47.973 "raid_level": "concat", 00:08:47.973 "superblock": true, 00:08:47.973 "num_base_bdevs": 3, 00:08:47.973 "num_base_bdevs_discovered": 1, 00:08:47.973 "num_base_bdevs_operational": 
3, 00:08:47.973 "base_bdevs_list": [ 00:08:47.973 { 00:08:47.973 "name": null, 00:08:47.973 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:47.973 "is_configured": false, 00:08:47.973 "data_offset": 0, 00:08:47.973 "data_size": 63488 00:08:47.973 }, 00:08:47.973 { 00:08:47.973 "name": null, 00:08:47.973 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:47.973 "is_configured": false, 00:08:47.973 "data_offset": 0, 00:08:47.973 "data_size": 63488 00:08:47.973 }, 00:08:47.973 { 00:08:47.973 "name": "BaseBdev3", 00:08:47.973 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:47.973 "is_configured": true, 00:08:47.973 "data_offset": 2048, 00:08:47.973 "data_size": 63488 00:08:47.973 } 00:08:47.973 ] 00:08:47.973 }' 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.973 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:48.233 [2024-11-18 23:04:07.559506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.233 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:48.493 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.493 "name": "Existed_Raid", 00:08:48.493 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:48.493 "strip_size_kb": 64, 00:08:48.493 "state": "configuring", 00:08:48.493 "raid_level": "concat", 00:08:48.493 "superblock": true, 00:08:48.493 "num_base_bdevs": 3, 00:08:48.493 "num_base_bdevs_discovered": 2, 00:08:48.493 "num_base_bdevs_operational": 3, 00:08:48.493 "base_bdevs_list": [ 00:08:48.493 { 00:08:48.493 "name": null, 00:08:48.493 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:48.493 "is_configured": false, 00:08:48.493 "data_offset": 0, 00:08:48.493 "data_size": 63488 00:08:48.493 }, 00:08:48.493 { 00:08:48.493 "name": "BaseBdev2", 00:08:48.493 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:48.493 "is_configured": true, 00:08:48.493 "data_offset": 2048, 00:08:48.493 "data_size": 63488 00:08:48.493 }, 00:08:48.493 { 00:08:48.493 "name": "BaseBdev3", 00:08:48.493 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:48.493 "is_configured": true, 00:08:48.493 "data_offset": 2048, 00:08:48.493 "data_size": 63488 00:08:48.493 } 00:08:48.493 ] 00:08:48.493 }' 00:08:48.493 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.493 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 23:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 66d93dd5-952a-404c-bc44-69dfb9796350 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 [2024-11-18 23:04:08.037632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:48.753 [2024-11-18 23:04:08.037871] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:48.753 [2024-11-18 23:04:08.037911] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.753 NewBaseBdev 00:08:48.753 [2024-11-18 23:04:08.038231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:48.753 [2024-11-18 23:04:08.038396] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:48.753 [2024-11-18 23:04:08.038437] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.753 [2024-11-18 23:04:08.038588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 [ 00:08:48.753 { 00:08:48.753 "name": "NewBaseBdev", 00:08:48.753 "aliases": [ 00:08:48.753 "66d93dd5-952a-404c-bc44-69dfb9796350" 00:08:48.753 ], 00:08:48.753 "product_name": "Malloc disk", 00:08:48.753 "block_size": 512, 00:08:48.753 "num_blocks": 65536, 00:08:48.753 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 
00:08:48.753 "assigned_rate_limits": { 00:08:48.753 "rw_ios_per_sec": 0, 00:08:48.753 "rw_mbytes_per_sec": 0, 00:08:48.753 "r_mbytes_per_sec": 0, 00:08:48.753 "w_mbytes_per_sec": 0 00:08:48.753 }, 00:08:48.753 "claimed": true, 00:08:48.753 "claim_type": "exclusive_write", 00:08:48.753 "zoned": false, 00:08:48.753 "supported_io_types": { 00:08:48.753 "read": true, 00:08:48.753 "write": true, 00:08:48.753 "unmap": true, 00:08:48.753 "flush": true, 00:08:48.753 "reset": true, 00:08:48.753 "nvme_admin": false, 00:08:48.753 "nvme_io": false, 00:08:48.753 "nvme_io_md": false, 00:08:48.753 "write_zeroes": true, 00:08:48.753 "zcopy": true, 00:08:48.753 "get_zone_info": false, 00:08:48.753 "zone_management": false, 00:08:48.753 "zone_append": false, 00:08:48.753 "compare": false, 00:08:48.753 "compare_and_write": false, 00:08:48.753 "abort": true, 00:08:48.753 "seek_hole": false, 00:08:48.753 "seek_data": false, 00:08:48.753 "copy": true, 00:08:48.753 "nvme_iov_md": false 00:08:48.753 }, 00:08:48.753 "memory_domains": [ 00:08:48.753 { 00:08:48.753 "dma_device_id": "system", 00:08:48.753 "dma_device_type": 1 00:08:48.753 }, 00:08:48.753 { 00:08:48.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.753 "dma_device_type": 2 00:08:48.753 } 00:08:48.753 ], 00:08:48.753 "driver_specific": {} 00:08:48.753 } 00:08:48.753 ] 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.753 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.753 "name": "Existed_Raid", 00:08:48.753 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:48.753 "strip_size_kb": 64, 00:08:48.753 "state": "online", 00:08:48.753 "raid_level": "concat", 00:08:48.753 "superblock": true, 00:08:48.753 "num_base_bdevs": 3, 00:08:48.753 "num_base_bdevs_discovered": 3, 00:08:48.753 "num_base_bdevs_operational": 3, 00:08:48.753 "base_bdevs_list": [ 00:08:48.753 { 00:08:48.753 "name": "NewBaseBdev", 00:08:48.753 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:48.753 "is_configured": true, 00:08:48.753 "data_offset": 2048, 
00:08:48.753 "data_size": 63488 00:08:48.753 }, 00:08:48.753 { 00:08:48.754 "name": "BaseBdev2", 00:08:48.754 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:48.754 "is_configured": true, 00:08:48.754 "data_offset": 2048, 00:08:48.754 "data_size": 63488 00:08:48.754 }, 00:08:48.754 { 00:08:48.754 "name": "BaseBdev3", 00:08:48.754 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:48.754 "is_configured": true, 00:08:48.754 "data_offset": 2048, 00:08:48.754 "data_size": 63488 00:08:48.754 } 00:08:48.754 ] 00:08:48.754 }' 00:08:48.754 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.754 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 [2024-11-18 23:04:08.421275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.324 "name": "Existed_Raid", 00:08:49.324 "aliases": [ 00:08:49.324 "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116" 00:08:49.324 ], 00:08:49.324 "product_name": "Raid Volume", 00:08:49.324 "block_size": 512, 00:08:49.324 "num_blocks": 190464, 00:08:49.324 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:49.324 "assigned_rate_limits": { 00:08:49.324 "rw_ios_per_sec": 0, 00:08:49.324 "rw_mbytes_per_sec": 0, 00:08:49.324 "r_mbytes_per_sec": 0, 00:08:49.324 "w_mbytes_per_sec": 0 00:08:49.324 }, 00:08:49.324 "claimed": false, 00:08:49.324 "zoned": false, 00:08:49.324 "supported_io_types": { 00:08:49.324 "read": true, 00:08:49.324 "write": true, 00:08:49.324 "unmap": true, 00:08:49.324 "flush": true, 00:08:49.324 "reset": true, 00:08:49.324 "nvme_admin": false, 00:08:49.324 "nvme_io": false, 00:08:49.324 "nvme_io_md": false, 00:08:49.324 "write_zeroes": true, 00:08:49.324 "zcopy": false, 00:08:49.324 "get_zone_info": false, 00:08:49.324 "zone_management": false, 00:08:49.324 "zone_append": false, 00:08:49.324 "compare": false, 00:08:49.324 "compare_and_write": false, 00:08:49.324 "abort": false, 00:08:49.324 "seek_hole": false, 00:08:49.324 "seek_data": false, 00:08:49.324 "copy": false, 00:08:49.324 "nvme_iov_md": false 00:08:49.324 }, 00:08:49.324 "memory_domains": [ 00:08:49.324 { 00:08:49.324 "dma_device_id": "system", 00:08:49.324 "dma_device_type": 1 00:08:49.324 }, 00:08:49.324 { 00:08:49.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.324 "dma_device_type": 2 00:08:49.324 }, 00:08:49.324 { 00:08:49.324 "dma_device_id": "system", 00:08:49.324 "dma_device_type": 1 00:08:49.324 }, 00:08:49.324 { 00:08:49.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.324 "dma_device_type": 2 00:08:49.324 }, 00:08:49.324 { 
00:08:49.324 "dma_device_id": "system", 00:08:49.324 "dma_device_type": 1 00:08:49.324 }, 00:08:49.324 { 00:08:49.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.324 "dma_device_type": 2 00:08:49.324 } 00:08:49.324 ], 00:08:49.324 "driver_specific": { 00:08:49.324 "raid": { 00:08:49.324 "uuid": "f1c6ee4d-d2a2-4925-a2ed-4ae636c09116", 00:08:49.324 "strip_size_kb": 64, 00:08:49.324 "state": "online", 00:08:49.324 "raid_level": "concat", 00:08:49.324 "superblock": true, 00:08:49.324 "num_base_bdevs": 3, 00:08:49.324 "num_base_bdevs_discovered": 3, 00:08:49.324 "num_base_bdevs_operational": 3, 00:08:49.324 "base_bdevs_list": [ 00:08:49.324 { 00:08:49.324 "name": "NewBaseBdev", 00:08:49.324 "uuid": "66d93dd5-952a-404c-bc44-69dfb9796350", 00:08:49.324 "is_configured": true, 00:08:49.324 "data_offset": 2048, 00:08:49.324 "data_size": 63488 00:08:49.324 }, 00:08:49.324 { 00:08:49.324 "name": "BaseBdev2", 00:08:49.324 "uuid": "e104e927-f943-41d6-bccb-d3bdbe3218fb", 00:08:49.324 "is_configured": true, 00:08:49.324 "data_offset": 2048, 00:08:49.324 "data_size": 63488 00:08:49.324 }, 00:08:49.324 { 00:08:49.324 "name": "BaseBdev3", 00:08:49.324 "uuid": "18b032d6-5ee6-43a5-9674-84cb8f44ddcd", 00:08:49.324 "is_configured": true, 00:08:49.324 "data_offset": 2048, 00:08:49.324 "data_size": 63488 00:08:49.324 } 00:08:49.324 ] 00:08:49.324 } 00:08:49.324 } 00:08:49.324 }' 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:49.324 BaseBdev2 00:08:49.324 BaseBdev3' 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.324 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.325 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 [2024-11-18 23:04:08.696532] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.325 [2024-11-18 23:04:08.696596] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.325 [2024-11-18 23:04:08.696677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.325 [2024-11-18 23:04:08.696747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.325 [2024-11-18 23:04:08.696782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77328 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77328 ']' 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77328 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77328 00:08:49.585 killing process with pid 77328 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77328' 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77328 00:08:49.585 [2024-11-18 23:04:08.744653] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.585 23:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77328 00:08:49.585 [2024-11-18 23:04:08.774591] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.847 23:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.847 00:08:49.847 real 0m8.300s 00:08:49.847 user 0m14.108s 00:08:49.847 sys 0m1.660s 00:08:49.847 23:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.847 23:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:49.847 ************************************ 00:08:49.847 END TEST raid_state_function_test_sb 00:08:49.847 ************************************ 00:08:49.847 23:04:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:49.847 23:04:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:49.847 23:04:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.847 23:04:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.847 ************************************ 00:08:49.847 START TEST raid_superblock_test 00:08:49.847 ************************************ 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:49.847 23:04:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77926 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77926 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77926 ']' 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.847 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.847 [2024-11-18 23:04:09.174845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:49.847 [2024-11-18 23:04:09.174984] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77926 ] 00:08:50.107 [2024-11-18 23:04:09.335124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.107 [2024-11-18 23:04:09.379630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.107 [2024-11-18 23:04:09.421781] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.107 [2024-11-18 23:04:09.421815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:50.675 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:50.676 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.676 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.676 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.676 23:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:50.676 
23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.676 23:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.676 malloc1 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.676 [2024-11-18 23:04:10.015961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.676 [2024-11-18 23:04:10.016095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.676 [2024-11-18 23:04:10.016137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.676 [2024-11-18 23:04:10.016174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.676 [2024-11-18 23:04:10.018261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.676 [2024-11-18 23:04:10.018362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.676 pt1 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.676 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.936 malloc2 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.936 [2024-11-18 23:04:10.062268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.936 [2024-11-18 23:04:10.062484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.936 [2024-11-18 23:04:10.062565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:50.936 [2024-11-18 23:04:10.062659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.936 [2024-11-18 23:04:10.067545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.936 [2024-11-18 23:04:10.067696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.936 
pt2 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.936 malloc3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.936 [2024-11-18 23:04:10.097136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:50.936 [2024-11-18 23:04:10.097221] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.936 [2024-11-18 23:04:10.097255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:50.936 [2024-11-18 23:04:10.097293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.936 [2024-11-18 23:04:10.099389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.936 [2024-11-18 23:04:10.099474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:50.936 pt3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.936 [2024-11-18 23:04:10.109167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.936 [2024-11-18 23:04:10.111053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.936 [2024-11-18 23:04:10.111177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:50.936 [2024-11-18 23:04:10.111360] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:50.936 [2024-11-18 23:04:10.111407] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.936 [2024-11-18 23:04:10.111685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:50.936 [2024-11-18 23:04:10.111850] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:50.936 [2024-11-18 23:04:10.111897] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:50.936 [2024-11-18 23:04:10.112023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.936 23:04:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.936 "name": "raid_bdev1", 00:08:50.936 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:50.936 "strip_size_kb": 64, 00:08:50.936 "state": "online", 00:08:50.936 "raid_level": "concat", 00:08:50.936 "superblock": true, 00:08:50.936 "num_base_bdevs": 3, 00:08:50.936 "num_base_bdevs_discovered": 3, 00:08:50.936 "num_base_bdevs_operational": 3, 00:08:50.936 "base_bdevs_list": [ 00:08:50.936 { 00:08:50.936 "name": "pt1", 00:08:50.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.936 "is_configured": true, 00:08:50.936 "data_offset": 2048, 00:08:50.936 "data_size": 63488 00:08:50.936 }, 00:08:50.936 { 00:08:50.936 "name": "pt2", 00:08:50.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.936 "is_configured": true, 00:08:50.936 "data_offset": 2048, 00:08:50.936 "data_size": 63488 00:08:50.936 }, 00:08:50.936 { 00:08:50.936 "name": "pt3", 00:08:50.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.936 "is_configured": true, 00:08:50.936 "data_offset": 2048, 00:08:50.936 "data_size": 63488 00:08:50.936 } 00:08:50.936 ] 00:08:50.936 }' 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.936 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.196 [2024-11-18 23:04:10.536690] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.196 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.456 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.456 "name": "raid_bdev1", 00:08:51.456 "aliases": [ 00:08:51.456 "5e289561-da3e-4dd4-a575-53a74536c607" 00:08:51.456 ], 00:08:51.456 "product_name": "Raid Volume", 00:08:51.456 "block_size": 512, 00:08:51.456 "num_blocks": 190464, 00:08:51.456 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:51.456 "assigned_rate_limits": { 00:08:51.456 "rw_ios_per_sec": 0, 00:08:51.456 "rw_mbytes_per_sec": 0, 00:08:51.456 "r_mbytes_per_sec": 0, 00:08:51.456 "w_mbytes_per_sec": 0 00:08:51.456 }, 00:08:51.456 "claimed": false, 00:08:51.456 "zoned": false, 00:08:51.456 "supported_io_types": { 00:08:51.456 "read": true, 00:08:51.456 "write": true, 00:08:51.456 "unmap": true, 00:08:51.456 "flush": true, 00:08:51.456 "reset": true, 00:08:51.456 "nvme_admin": false, 00:08:51.456 "nvme_io": false, 00:08:51.456 "nvme_io_md": false, 00:08:51.456 "write_zeroes": true, 00:08:51.456 "zcopy": false, 00:08:51.456 "get_zone_info": false, 00:08:51.456 "zone_management": false, 00:08:51.456 "zone_append": false, 00:08:51.456 "compare": 
false, 00:08:51.456 "compare_and_write": false, 00:08:51.456 "abort": false, 00:08:51.456 "seek_hole": false, 00:08:51.456 "seek_data": false, 00:08:51.456 "copy": false, 00:08:51.456 "nvme_iov_md": false 00:08:51.456 }, 00:08:51.456 "memory_domains": [ 00:08:51.456 { 00:08:51.456 "dma_device_id": "system", 00:08:51.456 "dma_device_type": 1 00:08:51.456 }, 00:08:51.456 { 00:08:51.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.456 "dma_device_type": 2 00:08:51.456 }, 00:08:51.457 { 00:08:51.457 "dma_device_id": "system", 00:08:51.457 "dma_device_type": 1 00:08:51.457 }, 00:08:51.457 { 00:08:51.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.457 "dma_device_type": 2 00:08:51.457 }, 00:08:51.457 { 00:08:51.457 "dma_device_id": "system", 00:08:51.457 "dma_device_type": 1 00:08:51.457 }, 00:08:51.457 { 00:08:51.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.457 "dma_device_type": 2 00:08:51.457 } 00:08:51.457 ], 00:08:51.457 "driver_specific": { 00:08:51.457 "raid": { 00:08:51.457 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:51.457 "strip_size_kb": 64, 00:08:51.457 "state": "online", 00:08:51.457 "raid_level": "concat", 00:08:51.457 "superblock": true, 00:08:51.457 "num_base_bdevs": 3, 00:08:51.457 "num_base_bdevs_discovered": 3, 00:08:51.457 "num_base_bdevs_operational": 3, 00:08:51.457 "base_bdevs_list": [ 00:08:51.457 { 00:08:51.457 "name": "pt1", 00:08:51.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.457 "is_configured": true, 00:08:51.457 "data_offset": 2048, 00:08:51.457 "data_size": 63488 00:08:51.457 }, 00:08:51.457 { 00:08:51.457 "name": "pt2", 00:08:51.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.457 "is_configured": true, 00:08:51.457 "data_offset": 2048, 00:08:51.457 "data_size": 63488 00:08:51.457 }, 00:08:51.457 { 00:08:51.457 "name": "pt3", 00:08:51.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.457 "is_configured": true, 00:08:51.457 "data_offset": 2048, 00:08:51.457 
"data_size": 63488 00:08:51.457 } 00:08:51.457 ] 00:08:51.457 } 00:08:51.457 } 00:08:51.457 }' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.457 pt2 00:08:51.457 pt3' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.457 23:04:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.457 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.457 [2024-11-18 23:04:10.820137] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.718 23:04:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5e289561-da3e-4dd4-a575-53a74536c607 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5e289561-da3e-4dd4-a575-53a74536c607 ']' 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 [2024-11-18 23:04:10.863800] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.718 [2024-11-18 23:04:10.863824] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.718 [2024-11-18 23:04:10.863886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.718 [2024-11-18 23:04:10.863944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.718 [2024-11-18 23:04:10.863959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 [2024-11-18 23:04:11.015566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.718 [2024-11-18 23:04:11.017367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:08:51.718 [2024-11-18 23:04:11.017406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:51.718 [2024-11-18 23:04:11.017453] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:51.718 [2024-11-18 23:04:11.017500] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.718 [2024-11-18 23:04:11.017518] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:51.718 [2024-11-18 23:04:11.017530] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.718 [2024-11-18 23:04:11.017540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:51.718 request: 00:08:51.718 { 00:08:51.718 "name": "raid_bdev1", 00:08:51.718 "raid_level": "concat", 00:08:51.718 "base_bdevs": [ 00:08:51.718 "malloc1", 00:08:51.718 "malloc2", 00:08:51.718 "malloc3" 00:08:51.718 ], 00:08:51.718 "strip_size_kb": 64, 00:08:51.718 "superblock": false, 00:08:51.718 "method": "bdev_raid_create", 00:08:51.718 "req_id": 1 00:08:51.718 } 00:08:51.718 Got JSON-RPC error response 00:08:51.718 response: 00:08:51.718 { 00:08:51.718 "code": -17, 00:08:51.718 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.718 } 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.718 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 [2024-11-18 23:04:11.075425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.718 [2024-11-18 23:04:11.075509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.718 [2024-11-18 23:04:11.075540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:51.718 [2024-11-18 23:04:11.075568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.718 [2024-11-18 23:04:11.077657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.718 [2024-11-18 23:04:11.077724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.718 [2024-11-18 23:04:11.077805] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.719 [2024-11-18 23:04:11.077871] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.719 pt1 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.719 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.978 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.978 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.978 "name": "raid_bdev1", 
00:08:51.979 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:51.979 "strip_size_kb": 64, 00:08:51.979 "state": "configuring", 00:08:51.979 "raid_level": "concat", 00:08:51.979 "superblock": true, 00:08:51.979 "num_base_bdevs": 3, 00:08:51.979 "num_base_bdevs_discovered": 1, 00:08:51.979 "num_base_bdevs_operational": 3, 00:08:51.979 "base_bdevs_list": [ 00:08:51.979 { 00:08:51.979 "name": "pt1", 00:08:51.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.979 "is_configured": true, 00:08:51.979 "data_offset": 2048, 00:08:51.979 "data_size": 63488 00:08:51.979 }, 00:08:51.979 { 00:08:51.979 "name": null, 00:08:51.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.979 "is_configured": false, 00:08:51.979 "data_offset": 2048, 00:08:51.979 "data_size": 63488 00:08:51.979 }, 00:08:51.979 { 00:08:51.979 "name": null, 00:08:51.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.979 "is_configured": false, 00:08:51.979 "data_offset": 2048, 00:08:51.979 "data_size": 63488 00:08:51.979 } 00:08:51.979 ] 00:08:51.979 }' 00:08:51.979 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.979 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 [2024-11-18 23:04:11.430893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.238 [2024-11-18 23:04:11.430950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.238 [2024-11-18 23:04:11.430983] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:52.238 [2024-11-18 23:04:11.430996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.238 [2024-11-18 23:04:11.431394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.238 [2024-11-18 23:04:11.431416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.238 [2024-11-18 23:04:11.431486] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.238 [2024-11-18 23:04:11.431510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.238 pt2 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 [2024-11-18 23:04:11.442882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.238 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.239 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.239 "name": "raid_bdev1", 00:08:52.239 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:52.239 "strip_size_kb": 64, 00:08:52.239 "state": "configuring", 00:08:52.239 "raid_level": "concat", 00:08:52.239 "superblock": true, 00:08:52.239 "num_base_bdevs": 3, 00:08:52.239 "num_base_bdevs_discovered": 1, 00:08:52.239 "num_base_bdevs_operational": 3, 00:08:52.239 "base_bdevs_list": [ 00:08:52.239 { 00:08:52.239 "name": "pt1", 00:08:52.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.239 "is_configured": true, 00:08:52.239 "data_offset": 2048, 00:08:52.239 "data_size": 63488 00:08:52.239 }, 00:08:52.239 { 00:08:52.239 "name": null, 00:08:52.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.239 "is_configured": false, 00:08:52.239 "data_offset": 0, 00:08:52.239 "data_size": 63488 00:08:52.239 }, 00:08:52.239 { 00:08:52.239 "name": null, 00:08:52.239 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.239 "is_configured": false, 00:08:52.239 "data_offset": 2048, 00:08:52.239 "data_size": 63488 00:08:52.239 } 00:08:52.239 ] 00:08:52.239 }' 00:08:52.239 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.239 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.499 [2024-11-18 23:04:11.850157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.499 [2024-11-18 23:04:11.850260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.499 [2024-11-18 23:04:11.850304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:52.499 [2024-11-18 23:04:11.850333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.499 [2024-11-18 23:04:11.850696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.499 [2024-11-18 23:04:11.850748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.499 [2024-11-18 23:04:11.850835] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.499 [2024-11-18 23:04:11.850881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.499 pt2 00:08:52.499 23:04:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.499 [2024-11-18 23:04:11.862121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:52.499 [2024-11-18 23:04:11.862211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.499 [2024-11-18 23:04:11.862243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:52.499 [2024-11-18 23:04:11.862269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.499 [2024-11-18 23:04:11.862599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.499 [2024-11-18 23:04:11.862650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:52.499 [2024-11-18 23:04:11.862727] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:52.499 [2024-11-18 23:04:11.862770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:52.499 [2024-11-18 23:04:11.862881] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:52.499 [2024-11-18 23:04:11.862915] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:52.499 [2024-11-18 23:04:11.863152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:52.499 [2024-11-18 23:04:11.863330] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:52.499 [2024-11-18 23:04:11.863372] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:52.499 [2024-11-18 23:04:11.863503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.499 pt3 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.499 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.758 23:04:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.758 "name": "raid_bdev1", 00:08:52.758 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:52.758 "strip_size_kb": 64, 00:08:52.758 "state": "online", 00:08:52.758 "raid_level": "concat", 00:08:52.758 "superblock": true, 00:08:52.758 "num_base_bdevs": 3, 00:08:52.758 "num_base_bdevs_discovered": 3, 00:08:52.758 "num_base_bdevs_operational": 3, 00:08:52.758 "base_bdevs_list": [ 00:08:52.758 { 00:08:52.758 "name": "pt1", 00:08:52.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.758 "is_configured": true, 00:08:52.758 "data_offset": 2048, 00:08:52.758 "data_size": 63488 00:08:52.758 }, 00:08:52.758 { 00:08:52.758 "name": "pt2", 00:08:52.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.758 "is_configured": true, 00:08:52.758 "data_offset": 2048, 00:08:52.758 "data_size": 63488 00:08:52.758 }, 00:08:52.758 { 00:08:52.758 "name": "pt3", 00:08:52.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.758 "is_configured": true, 00:08:52.758 "data_offset": 2048, 00:08:52.758 "data_size": 63488 00:08:52.758 } 00:08:52.758 ] 00:08:52.758 }' 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.758 23:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.019 [2024-11-18 23:04:12.337589] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.019 "name": "raid_bdev1", 00:08:53.019 "aliases": [ 00:08:53.019 "5e289561-da3e-4dd4-a575-53a74536c607" 00:08:53.019 ], 00:08:53.019 "product_name": "Raid Volume", 00:08:53.019 "block_size": 512, 00:08:53.019 "num_blocks": 190464, 00:08:53.019 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:53.019 "assigned_rate_limits": { 00:08:53.019 "rw_ios_per_sec": 0, 00:08:53.019 "rw_mbytes_per_sec": 0, 00:08:53.019 "r_mbytes_per_sec": 0, 00:08:53.019 "w_mbytes_per_sec": 0 00:08:53.019 }, 00:08:53.019 "claimed": false, 00:08:53.019 "zoned": false, 00:08:53.019 "supported_io_types": { 00:08:53.019 "read": true, 00:08:53.019 "write": true, 00:08:53.019 "unmap": true, 00:08:53.019 "flush": true, 00:08:53.019 "reset": true, 00:08:53.019 "nvme_admin": false, 00:08:53.019 "nvme_io": false, 
00:08:53.019 "nvme_io_md": false, 00:08:53.019 "write_zeroes": true, 00:08:53.019 "zcopy": false, 00:08:53.019 "get_zone_info": false, 00:08:53.019 "zone_management": false, 00:08:53.019 "zone_append": false, 00:08:53.019 "compare": false, 00:08:53.019 "compare_and_write": false, 00:08:53.019 "abort": false, 00:08:53.019 "seek_hole": false, 00:08:53.019 "seek_data": false, 00:08:53.019 "copy": false, 00:08:53.019 "nvme_iov_md": false 00:08:53.019 }, 00:08:53.019 "memory_domains": [ 00:08:53.019 { 00:08:53.019 "dma_device_id": "system", 00:08:53.019 "dma_device_type": 1 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.019 "dma_device_type": 2 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "dma_device_id": "system", 00:08:53.019 "dma_device_type": 1 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.019 "dma_device_type": 2 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "dma_device_id": "system", 00:08:53.019 "dma_device_type": 1 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.019 "dma_device_type": 2 00:08:53.019 } 00:08:53.019 ], 00:08:53.019 "driver_specific": { 00:08:53.019 "raid": { 00:08:53.019 "uuid": "5e289561-da3e-4dd4-a575-53a74536c607", 00:08:53.019 "strip_size_kb": 64, 00:08:53.019 "state": "online", 00:08:53.019 "raid_level": "concat", 00:08:53.019 "superblock": true, 00:08:53.019 "num_base_bdevs": 3, 00:08:53.019 "num_base_bdevs_discovered": 3, 00:08:53.019 "num_base_bdevs_operational": 3, 00:08:53.019 "base_bdevs_list": [ 00:08:53.019 { 00:08:53.019 "name": "pt1", 00:08:53.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.019 "is_configured": true, 00:08:53.019 "data_offset": 2048, 00:08:53.019 "data_size": 63488 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "name": "pt2", 00:08:53.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.019 "is_configured": true, 00:08:53.019 "data_offset": 2048, 00:08:53.019 
"data_size": 63488 00:08:53.019 }, 00:08:53.019 { 00:08:53.019 "name": "pt3", 00:08:53.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.019 "is_configured": true, 00:08:53.019 "data_offset": 2048, 00:08:53.019 "data_size": 63488 00:08:53.019 } 00:08:53.019 ] 00:08:53.019 } 00:08:53.019 } 00:08:53.019 }' 00:08:53.019 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.279 pt2 00:08:53.279 pt3' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.279 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.280 [2024-11-18 23:04:12.613055] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5e289561-da3e-4dd4-a575-53a74536c607 '!=' 5e289561-da3e-4dd4-a575-53a74536c607 ']' 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77926 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77926 ']' 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77926 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.280 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77926 00:08:53.539 killing process with pid 77926 00:08:53.539 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.539 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.539 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77926' 00:08:53.539 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77926 00:08:53.539 [2024-11-18 23:04:12.674740] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:53.539 [2024-11-18 23:04:12.674815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.539 [2024-11-18 23:04:12.674873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.539 [2024-11-18 23:04:12.674882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:53.539 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77926 00:08:53.539 [2024-11-18 23:04:12.707154] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.800 ************************************ 00:08:53.800 END TEST raid_superblock_test 00:08:53.800 ************************************ 00:08:53.800 23:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:53.800 00:08:53.800 real 0m3.852s 00:08:53.800 user 0m6.041s 00:08:53.800 sys 0m0.816s 00:08:53.800 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.800 23:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.800 23:04:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:53.800 23:04:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.800 23:04:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.800 23:04:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.800 ************************************ 00:08:53.800 START TEST raid_read_error_test 00:08:53.800 ************************************ 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:53.800 23:04:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O38RDcu0FY 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78163 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78163 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78163 ']' 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.800 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.800 [2024-11-18 23:04:13.117619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
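The superblock test earlier in this log (bdev_raid.sh@188) extracts the configured base bdev names from the raid dump with a jq select. Run standalone against a minimal, hypothetical JSON shaped like that dump (only the fields the filter touches are included), the filter behaves like this:

```shell
# Hypothetical input, shaped like the raid dump earlier in this log.
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":false}
]}}}'
# The exact filter from bdev_raid.sh@188: keep only configured names.
names=$(printf '%s' "$json" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"
```

In the run above all three pt bdevs are configured, which is why `base_bdev_names` comes back as `pt1 pt2 pt3`.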
00:08:53.800 [2024-11-18 23:04:13.117823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78163 ] 00:08:54.060 [2024-11-18 23:04:13.278405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.060 [2024-11-18 23:04:13.324127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.060 [2024-11-18 23:04:13.366799] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.060 [2024-11-18 23:04:13.366888] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.630 BaseBdev1_malloc 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:54.630 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.631 true 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
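The trace that follows builds each base bdev as a three-layer stack: a malloc bdev, an error bdev wrapped around it (SPDK names the error bdev with an `EE_` prefix, visible in the passthru calls below), and a passthru bdev on top. A dry-run sketch of that sequence, assuming the usual `scripts/rpc.py` client path from an SPDK checkout — the echoes only print the commands, nothing is executed against a target:

```shell
rpc="scripts/rpc.py"   # assumption: path to the SPDK RPC client
cmds=$(for i in 1 2 3; do
  echo "$rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc"
  echo "$rpc bdev_error_create BaseBdev${i}_malloc"
  echo "$rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}"
done)
echo "$cmds"
```

The error bdev in the middle is what later lets the test inject read failures with `bdev_error_inject_error EE_BaseBdev1_malloc read failure`.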
00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.631 [2024-11-18 23:04:13.957229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.631 [2024-11-18 23:04:13.957333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.631 [2024-11-18 23:04:13.957372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.631 [2024-11-18 23:04:13.957381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.631 [2024-11-18 23:04:13.959453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.631 [2024-11-18 23:04:13.959484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.631 BaseBdev1 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.631 BaseBdev2_malloc 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.631 23:04:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.891 true 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.891 [2024-11-18 23:04:14.014409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:54.891 [2024-11-18 23:04:14.014477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.891 [2024-11-18 23:04:14.014504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:54.891 [2024-11-18 23:04:14.014517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.891 [2024-11-18 23:04:14.017599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.891 [2024-11-18 23:04:14.017646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:54.891 BaseBdev2 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.891 BaseBdev3_malloc 00:08:54.891 23:04:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.891 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.892 true 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.892 [2024-11-18 23:04:14.055082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:54.892 [2024-11-18 23:04:14.055184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.892 [2024-11-18 23:04:14.055222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:54.892 [2024-11-18 23:04:14.055230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.892 [2024-11-18 23:04:14.057227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.892 [2024-11-18 23:04:14.057262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:54.892 BaseBdev3 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.892 [2024-11-18 23:04:14.067118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.892 [2024-11-18 23:04:14.068973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.892 [2024-11-18 23:04:14.069061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.892 [2024-11-18 23:04:14.069236] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:54.892 [2024-11-18 23:04:14.069250] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.892 [2024-11-18 23:04:14.069509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:54.892 [2024-11-18 23:04:14.069630] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:54.892 [2024-11-18 23:04:14.069646] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:54.892 [2024-11-18 23:04:14.069783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.892 23:04:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.892 "name": "raid_bdev1", 00:08:54.892 "uuid": "a953dded-7b17-47a2-a9c9-cf9ba48d266d", 00:08:54.892 "strip_size_kb": 64, 00:08:54.892 "state": "online", 00:08:54.892 "raid_level": "concat", 00:08:54.892 "superblock": true, 00:08:54.892 "num_base_bdevs": 3, 00:08:54.892 "num_base_bdevs_discovered": 3, 00:08:54.892 "num_base_bdevs_operational": 3, 00:08:54.892 "base_bdevs_list": [ 00:08:54.892 { 00:08:54.892 "name": "BaseBdev1", 00:08:54.892 "uuid": "0814d380-1ddc-5b40-9651-0e358d06c115", 00:08:54.892 "is_configured": true, 00:08:54.892 "data_offset": 2048, 00:08:54.892 "data_size": 63488 00:08:54.892 }, 00:08:54.892 { 00:08:54.892 "name": "BaseBdev2", 00:08:54.892 "uuid": "271740c1-5d1d-5844-92c3-569df276843b", 00:08:54.892 "is_configured": true, 00:08:54.892 "data_offset": 2048, 00:08:54.892 "data_size": 63488 
00:08:54.892 }, 00:08:54.892 { 00:08:54.892 "name": "BaseBdev3", 00:08:54.892 "uuid": "132d01f8-3570-5c2a-919e-323ddd58a3e1", 00:08:54.892 "is_configured": true, 00:08:54.892 "data_offset": 2048, 00:08:54.892 "data_size": 63488 00:08:54.892 } 00:08:54.892 ] 00:08:54.892 }' 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.892 23:04:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.152 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:55.152 23:04:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:55.427 [2024-11-18 23:04:14.558726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:56.120 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:56.120 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.120 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
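The `cmp_raid_bdev='512 '` values earlier in this log, and the `[[ 512 == \5\1\2\ \ \ ]]` matches, come from the bdev_raid.sh@189/@192 filter joining four fields of which three are null: jq's `join` renders null elements as empty strings, leaving only the separator spaces. A minimal reproduction, with a hypothetical bdev record carrying just those four fields:

```shell
bdev='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
# Filter from bdev_raid.sh@189/@192; nulls join as empty strings,
# so the result is "512" followed by three spaces.
cmp=$(printf '%s' "$bdev" | jq -r \
  '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
printf '[%s]\n' "$cmp"
```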
00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.381 "name": "raid_bdev1", 00:08:56.381 "uuid": "a953dded-7b17-47a2-a9c9-cf9ba48d266d", 00:08:56.381 "strip_size_kb": 64, 00:08:56.381 "state": "online", 00:08:56.381 "raid_level": "concat", 00:08:56.381 "superblock": true, 00:08:56.381 "num_base_bdevs": 3, 00:08:56.381 "num_base_bdevs_discovered": 3, 00:08:56.381 "num_base_bdevs_operational": 3, 00:08:56.381 "base_bdevs_list": [ 00:08:56.381 { 00:08:56.381 "name": "BaseBdev1", 00:08:56.381 "uuid": "0814d380-1ddc-5b40-9651-0e358d06c115", 00:08:56.381 "is_configured": true, 00:08:56.381 "data_offset": 2048, 00:08:56.381 "data_size": 63488 
00:08:56.381 }, 00:08:56.381 { 00:08:56.381 "name": "BaseBdev2", 00:08:56.381 "uuid": "271740c1-5d1d-5844-92c3-569df276843b", 00:08:56.381 "is_configured": true, 00:08:56.381 "data_offset": 2048, 00:08:56.381 "data_size": 63488 00:08:56.381 }, 00:08:56.381 { 00:08:56.381 "name": "BaseBdev3", 00:08:56.381 "uuid": "132d01f8-3570-5c2a-919e-323ddd58a3e1", 00:08:56.381 "is_configured": true, 00:08:56.381 "data_offset": 2048, 00:08:56.381 "data_size": 63488 00:08:56.381 } 00:08:56.381 ] 00:08:56.381 }' 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.381 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.641 [2024-11-18 23:04:15.948697] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.641 [2024-11-18 23:04:15.948731] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.641 [2024-11-18 23:04:15.951090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.641 [2024-11-18 23:04:15.951135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.641 [2024-11-18 23:04:15.951175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.641 [2024-11-18 23:04:15.951186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:56.641 { 00:08:56.641 "results": [ 00:08:56.641 { 00:08:56.641 "job": "raid_bdev1", 00:08:56.641 "core_mask": "0x1", 00:08:56.641 "workload": "randrw", 00:08:56.641 "percentage": 50, 
00:08:56.641 "status": "finished", 00:08:56.641 "queue_depth": 1, 00:08:56.641 "io_size": 131072, 00:08:56.641 "runtime": 1.390824, 00:08:56.641 "iops": 17528.457950107273, 00:08:56.641 "mibps": 2191.057243763409, 00:08:56.641 "io_failed": 1, 00:08:56.641 "io_timeout": 0, 00:08:56.641 "avg_latency_us": 79.04273400417695, 00:08:56.641 "min_latency_us": 24.482096069868994, 00:08:56.641 "max_latency_us": 1387.989519650655 00:08:56.641 } 00:08:56.641 ], 00:08:56.641 "core_count": 1 00:08:56.641 } 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78163 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78163 ']' 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78163 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78163 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78163' 00:08:56.641 killing process with pid 78163 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78163 00:08:56.641 [2024-11-18 23:04:15.995949] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.641 23:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78163 00:08:56.902 [2024-11-18 
23:04:16.020646] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O38RDcu0FY 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:56.902 00:08:56.902 real 0m3.247s 00:08:56.902 user 0m4.054s 00:08:56.902 sys 0m0.524s 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.902 23:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.902 ************************************ 00:08:56.902 END TEST raid_read_error_test 00:08:56.902 ************************************ 00:08:57.163 23:04:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:57.163 23:04:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:57.163 23:04:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.163 23:04:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 ************************************ 00:08:57.163 START TEST raid_write_error_test 00:08:57.163 ************************************ 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:57.163 23:04:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:57.163 23:04:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iRdGNR29x3 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78292 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78292 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78292 ']' 00:08:57.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
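The read-error run above reported `iops` 17528.46, `mibps` 2191.06 and `fail_per_s` 0.72, and those figures are internally consistent: with `io_size` 131072 bytes (128 KiB), MiB/s is iops × 131072 / 2^20, and the failure rate is `io_failed` divided by `runtime`. A quick check with awk, using the numbers copied from the results JSON above:

```shell
# Figures copied from the bdevperf results block earlier in this log.
iops=17528.457950107273
runtime=1.390824
io_failed=1
# MiB/s = iops * io_size_bytes / bytes_per_MiB
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.6f", i * 131072 / 1048576 }')
# Failures per second = io_failed / runtime (the 0.72 the test greps for)
fails=$(awk -v f="$io_failed" -v r="$runtime" 'BEGIN { printf "%.2f", f / r }')
echo "$mibps $fails"
```

The 0.72 recovered here is exactly the `fail_per_s` value the test extracts from the bdevperf log with `grep`/`awk` and compares against `0.00`.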
00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.163 23:04:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 [2024-11-18 23:04:16.439864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.163 [2024-11-18 23:04:16.440078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78292 ] 00:08:57.423 [2024-11-18 23:04:16.597525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.423 [2024-11-18 23:04:16.641373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.423 [2024-11-18 23:04:16.683513] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.423 [2024-11-18 23:04:16.683550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.993 BaseBdev1_malloc 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.993 true 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.993 [2024-11-18 23:04:17.281659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.993 [2024-11-18 23:04:17.281764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.993 [2024-11-18 23:04:17.281790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:57.993 [2024-11-18 23:04:17.281807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.993 [2024-11-18 23:04:17.283891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.993 [2024-11-18 23:04:17.283928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:57.993 BaseBdev1 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.993 BaseBdev2_malloc 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.993 true 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.993 [2024-11-18 23:04:17.338370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:57.993 [2024-11-18 23:04:17.338438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.993 [2024-11-18 23:04:17.338466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:57.993 [2024-11-18 23:04:17.338479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.993 [2024-11-18 23:04:17.341555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.993 [2024-11-18 23:04:17.341602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:57.993 BaseBdev2 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.993 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.994 23:04:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:57.994 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.994 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.994 BaseBdev3_malloc 00:08:57.994 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.994 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:57.994 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.994 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.254 true 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.254 [2024-11-18 23:04:17.379212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:58.254 [2024-11-18 23:04:17.379257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.254 [2024-11-18 23:04:17.379275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:58.254 [2024-11-18 23:04:17.379299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.254 [2024-11-18 23:04:17.381374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.254 [2024-11-18 23:04:17.381403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:58.254 BaseBdev3 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.254 [2024-11-18 23:04:17.391250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.254 [2024-11-18 23:04:17.393142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.254 [2024-11-18 23:04:17.393214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.254 [2024-11-18 23:04:17.393382] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:58.254 [2024-11-18 23:04:17.393396] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:58.254 [2024-11-18 23:04:17.393612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:58.254 [2024-11-18 23:04:17.393730] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:58.254 [2024-11-18 23:04:17.393743] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:58.254 [2024-11-18 23:04:17.393868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.254 "name": "raid_bdev1", 00:08:58.254 "uuid": "956fbc99-2cab-4ff0-bfd0-17c26fa85965", 00:08:58.254 "strip_size_kb": 64, 00:08:58.254 "state": "online", 00:08:58.254 "raid_level": "concat", 00:08:58.254 "superblock": true, 00:08:58.254 "num_base_bdevs": 3, 00:08:58.254 "num_base_bdevs_discovered": 3, 00:08:58.254 "num_base_bdevs_operational": 3, 00:08:58.254 "base_bdevs_list": [ 00:08:58.254 { 00:08:58.254 
"name": "BaseBdev1", 00:08:58.254 "uuid": "057aea06-9ef5-57c4-a51d-5b149f3c34c8", 00:08:58.254 "is_configured": true, 00:08:58.254 "data_offset": 2048, 00:08:58.254 "data_size": 63488 00:08:58.254 }, 00:08:58.254 { 00:08:58.254 "name": "BaseBdev2", 00:08:58.254 "uuid": "b5deb45d-c7a9-5841-a722-960846201381", 00:08:58.254 "is_configured": true, 00:08:58.254 "data_offset": 2048, 00:08:58.254 "data_size": 63488 00:08:58.254 }, 00:08:58.254 { 00:08:58.254 "name": "BaseBdev3", 00:08:58.254 "uuid": "16f7b6aa-070e-51ec-8224-59b820f04a43", 00:08:58.254 "is_configured": true, 00:08:58.254 "data_offset": 2048, 00:08:58.254 "data_size": 63488 00:08:58.254 } 00:08:58.254 ] 00:08:58.254 }' 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.254 23:04:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.515 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:58.515 23:04:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:58.515 [2024-11-18 23:04:17.870786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.454 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.723 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.724 "name": "raid_bdev1", 00:08:59.724 "uuid": "956fbc99-2cab-4ff0-bfd0-17c26fa85965", 00:08:59.724 "strip_size_kb": 64, 00:08:59.724 "state": "online", 
00:08:59.724 "raid_level": "concat", 00:08:59.724 "superblock": true, 00:08:59.724 "num_base_bdevs": 3, 00:08:59.724 "num_base_bdevs_discovered": 3, 00:08:59.724 "num_base_bdevs_operational": 3, 00:08:59.724 "base_bdevs_list": [ 00:08:59.724 { 00:08:59.724 "name": "BaseBdev1", 00:08:59.724 "uuid": "057aea06-9ef5-57c4-a51d-5b149f3c34c8", 00:08:59.724 "is_configured": true, 00:08:59.724 "data_offset": 2048, 00:08:59.724 "data_size": 63488 00:08:59.724 }, 00:08:59.724 { 00:08:59.724 "name": "BaseBdev2", 00:08:59.724 "uuid": "b5deb45d-c7a9-5841-a722-960846201381", 00:08:59.724 "is_configured": true, 00:08:59.724 "data_offset": 2048, 00:08:59.724 "data_size": 63488 00:08:59.724 }, 00:08:59.724 { 00:08:59.724 "name": "BaseBdev3", 00:08:59.724 "uuid": "16f7b6aa-070e-51ec-8224-59b820f04a43", 00:08:59.724 "is_configured": true, 00:08:59.724 "data_offset": 2048, 00:08:59.724 "data_size": 63488 00:08:59.724 } 00:08:59.724 ] 00:08:59.724 }' 00:08:59.724 23:04:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.724 23:04:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.990 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.990 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.990 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.990 [2024-11-18 23:04:19.218183] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.991 [2024-11-18 23:04:19.218218] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.991 [2024-11-18 23:04:19.220654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.991 [2024-11-18 23:04:19.220751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.991 [2024-11-18 23:04:19.220792] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.991 [2024-11-18 23:04:19.220802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:59.991 { 00:08:59.991 "results": [ 00:08:59.991 { 00:08:59.991 "job": "raid_bdev1", 00:08:59.991 "core_mask": "0x1", 00:08:59.991 "workload": "randrw", 00:08:59.991 "percentage": 50, 00:08:59.991 "status": "finished", 00:08:59.991 "queue_depth": 1, 00:08:59.991 "io_size": 131072, 00:08:59.991 "runtime": 1.348218, 00:08:59.991 "iops": 17609.170030366007, 00:08:59.991 "mibps": 2201.146253795751, 00:08:59.991 "io_failed": 1, 00:08:59.991 "io_timeout": 0, 00:08:59.991 "avg_latency_us": 78.70917339566276, 00:08:59.991 "min_latency_us": 24.258515283842794, 00:08:59.991 "max_latency_us": 1380.8349344978167 00:08:59.991 } 00:08:59.991 ], 00:08:59.991 "core_count": 1 00:08:59.991 } 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78292 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78292 ']' 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78292 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78292 00:08:59.991 killing process with pid 78292 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.991 23:04:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78292' 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78292 00:08:59.991 [2024-11-18 23:04:19.255622] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.991 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78292 00:08:59.991 [2024-11-18 23:04:19.280589] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iRdGNR29x3 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:00.250 ************************************ 00:09:00.250 END TEST raid_write_error_test 00:09:00.250 ************************************ 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:00.250 00:09:00.250 real 0m3.183s 00:09:00.250 user 0m3.940s 00:09:00.250 sys 0m0.521s 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.250 23:04:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.250 23:04:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:00.250 23:04:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:00.250 23:04:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.250 23:04:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.250 23:04:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.250 ************************************ 00:09:00.250 START TEST raid_state_function_test 00:09:00.250 ************************************ 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:00.250 Process raid pid: 78424 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78424 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78424' 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78424 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78424 ']' 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.250 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.251 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.251 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.251 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.511 [2024-11-18 23:04:19.684229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:00.511 [2024-11-18 23:04:19.684457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.511 [2024-11-18 23:04:19.843861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.771 [2024-11-18 23:04:19.888701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.771 [2024-11-18 23:04:19.930843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.771 [2024-11-18 23:04:19.930953] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.341 [2024-11-18 23:04:20.508486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.341 [2024-11-18 23:04:20.508607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.341 [2024-11-18 23:04:20.508656] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.341 [2024-11-18 23:04:20.508679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.341 [2024-11-18 23:04:20.508697] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.341 [2024-11-18 23:04:20.508721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.341 
23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.341 "name": "Existed_Raid", 00:09:01.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.341 "strip_size_kb": 0, 00:09:01.341 "state": "configuring", 00:09:01.341 "raid_level": "raid1", 00:09:01.341 "superblock": false, 00:09:01.341 "num_base_bdevs": 3, 00:09:01.341 "num_base_bdevs_discovered": 0, 00:09:01.341 "num_base_bdevs_operational": 3, 00:09:01.341 "base_bdevs_list": [ 00:09:01.341 { 00:09:01.341 "name": "BaseBdev1", 00:09:01.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.341 "is_configured": false, 00:09:01.341 "data_offset": 0, 00:09:01.341 "data_size": 0 00:09:01.341 }, 00:09:01.341 { 00:09:01.341 "name": "BaseBdev2", 00:09:01.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.341 "is_configured": false, 00:09:01.341 "data_offset": 0, 00:09:01.341 "data_size": 0 00:09:01.341 }, 00:09:01.341 { 00:09:01.341 "name": "BaseBdev3", 00:09:01.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.341 "is_configured": false, 00:09:01.341 "data_offset": 0, 00:09:01.341 "data_size": 0 00:09:01.341 } 00:09:01.341 ] 00:09:01.341 }' 00:09:01.341 23:04:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.341 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 [2024-11-18 23:04:20.923696] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.601 [2024-11-18 23:04:20.923776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 [2024-11-18 23:04:20.935700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.601 [2024-11-18 23:04:20.935773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.601 [2024-11-18 23:04:20.935802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.601 [2024-11-18 23:04:20.935824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.601 [2024-11-18 23:04:20.935841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.601 [2024-11-18 23:04:20.935861] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 [2024-11-18 23:04:20.956552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.601 BaseBdev1 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.601 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.861 [ 00:09:01.861 { 00:09:01.861 "name": "BaseBdev1", 00:09:01.861 "aliases": [ 00:09:01.861 "5cd8701f-57bf-41ee-863e-9d9e34107bcc" 00:09:01.861 ], 00:09:01.861 "product_name": "Malloc disk", 00:09:01.861 "block_size": 512, 00:09:01.861 "num_blocks": 65536, 00:09:01.861 "uuid": "5cd8701f-57bf-41ee-863e-9d9e34107bcc", 00:09:01.861 "assigned_rate_limits": { 00:09:01.861 "rw_ios_per_sec": 0, 00:09:01.861 "rw_mbytes_per_sec": 0, 00:09:01.861 "r_mbytes_per_sec": 0, 00:09:01.861 "w_mbytes_per_sec": 0 00:09:01.861 }, 00:09:01.861 "claimed": true, 00:09:01.861 "claim_type": "exclusive_write", 00:09:01.861 "zoned": false, 00:09:01.861 "supported_io_types": { 00:09:01.861 "read": true, 00:09:01.861 "write": true, 00:09:01.861 "unmap": true, 00:09:01.861 "flush": true, 00:09:01.861 "reset": true, 00:09:01.861 "nvme_admin": false, 00:09:01.861 "nvme_io": false, 00:09:01.861 "nvme_io_md": false, 00:09:01.861 "write_zeroes": true, 00:09:01.861 "zcopy": true, 00:09:01.861 "get_zone_info": false, 00:09:01.861 "zone_management": false, 00:09:01.861 "zone_append": false, 00:09:01.861 "compare": false, 00:09:01.861 "compare_and_write": false, 00:09:01.861 "abort": true, 00:09:01.861 "seek_hole": false, 00:09:01.861 "seek_data": false, 00:09:01.861 "copy": true, 00:09:01.861 "nvme_iov_md": false 00:09:01.861 }, 00:09:01.861 "memory_domains": [ 00:09:01.861 { 00:09:01.861 "dma_device_id": "system", 00:09:01.861 "dma_device_type": 1 00:09:01.861 }, 00:09:01.861 { 00:09:01.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.861 "dma_device_type": 2 00:09:01.861 } 00:09:01.861 ], 00:09:01.861 "driver_specific": {} 00:09:01.861 } 00:09:01.861 ] 00:09:01.861 23:04:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.861 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.862 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.862 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.862 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.862 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:01.862 "name": "Existed_Raid", 00:09:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.862 "strip_size_kb": 0, 00:09:01.862 "state": "configuring", 00:09:01.862 "raid_level": "raid1", 00:09:01.862 "superblock": false, 00:09:01.862 "num_base_bdevs": 3, 00:09:01.862 "num_base_bdevs_discovered": 1, 00:09:01.862 "num_base_bdevs_operational": 3, 00:09:01.862 "base_bdevs_list": [ 00:09:01.862 { 00:09:01.862 "name": "BaseBdev1", 00:09:01.862 "uuid": "5cd8701f-57bf-41ee-863e-9d9e34107bcc", 00:09:01.862 "is_configured": true, 00:09:01.862 "data_offset": 0, 00:09:01.862 "data_size": 65536 00:09:01.862 }, 00:09:01.862 { 00:09:01.862 "name": "BaseBdev2", 00:09:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.862 "is_configured": false, 00:09:01.862 "data_offset": 0, 00:09:01.862 "data_size": 0 00:09:01.862 }, 00:09:01.862 { 00:09:01.862 "name": "BaseBdev3", 00:09:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.862 "is_configured": false, 00:09:01.862 "data_offset": 0, 00:09:01.862 "data_size": 0 00:09:01.862 } 00:09:01.862 ] 00:09:01.862 }' 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.862 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 [2024-11-18 23:04:21.363863] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.121 [2024-11-18 23:04:21.363947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 [2024-11-18 23:04:21.371897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.121 [2024-11-18 23:04:21.373648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.121 [2024-11-18 23:04:21.373689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.121 [2024-11-18 23:04:21.373699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.121 [2024-11-18 23:04:21.373709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.121 "name": "Existed_Raid", 00:09:02.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.121 "strip_size_kb": 0, 00:09:02.121 "state": "configuring", 00:09:02.121 "raid_level": "raid1", 00:09:02.121 "superblock": false, 00:09:02.121 "num_base_bdevs": 3, 00:09:02.121 "num_base_bdevs_discovered": 1, 00:09:02.121 "num_base_bdevs_operational": 3, 00:09:02.121 "base_bdevs_list": [ 00:09:02.121 { 00:09:02.121 "name": "BaseBdev1", 00:09:02.121 "uuid": "5cd8701f-57bf-41ee-863e-9d9e34107bcc", 00:09:02.121 "is_configured": true, 00:09:02.121 "data_offset": 0, 00:09:02.121 "data_size": 65536 00:09:02.121 }, 00:09:02.121 { 00:09:02.121 "name": "BaseBdev2", 00:09:02.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.121 
"is_configured": false, 00:09:02.121 "data_offset": 0, 00:09:02.121 "data_size": 0 00:09:02.121 }, 00:09:02.121 { 00:09:02.121 "name": "BaseBdev3", 00:09:02.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.121 "is_configured": false, 00:09:02.121 "data_offset": 0, 00:09:02.121 "data_size": 0 00:09:02.121 } 00:09:02.121 ] 00:09:02.121 }' 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.121 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.691 [2024-11-18 23:04:21.847964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.691 BaseBdev2 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.691 23:04:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.691 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.691 [ 00:09:02.691 { 00:09:02.691 "name": "BaseBdev2", 00:09:02.691 "aliases": [ 00:09:02.691 "13e5408c-7d7c-4b16-8403-2205fa5b5958" 00:09:02.691 ], 00:09:02.691 "product_name": "Malloc disk", 00:09:02.691 "block_size": 512, 00:09:02.691 "num_blocks": 65536, 00:09:02.691 "uuid": "13e5408c-7d7c-4b16-8403-2205fa5b5958", 00:09:02.691 "assigned_rate_limits": { 00:09:02.691 "rw_ios_per_sec": 0, 00:09:02.691 "rw_mbytes_per_sec": 0, 00:09:02.691 "r_mbytes_per_sec": 0, 00:09:02.691 "w_mbytes_per_sec": 0 00:09:02.691 }, 00:09:02.691 "claimed": true, 00:09:02.691 "claim_type": "exclusive_write", 00:09:02.691 "zoned": false, 00:09:02.691 "supported_io_types": { 00:09:02.691 "read": true, 00:09:02.691 "write": true, 00:09:02.691 "unmap": true, 00:09:02.691 "flush": true, 00:09:02.691 "reset": true, 00:09:02.691 "nvme_admin": false, 00:09:02.691 "nvme_io": false, 00:09:02.691 "nvme_io_md": false, 00:09:02.691 "write_zeroes": true, 00:09:02.691 "zcopy": true, 00:09:02.691 "get_zone_info": false, 00:09:02.691 "zone_management": false, 00:09:02.691 "zone_append": false, 00:09:02.691 "compare": false, 00:09:02.691 "compare_and_write": false, 00:09:02.691 "abort": true, 00:09:02.691 "seek_hole": false, 00:09:02.691 "seek_data": false, 00:09:02.691 "copy": true, 00:09:02.691 "nvme_iov_md": false 00:09:02.691 }, 00:09:02.691 
"memory_domains": [ 00:09:02.692 { 00:09:02.692 "dma_device_id": "system", 00:09:02.692 "dma_device_type": 1 00:09:02.692 }, 00:09:02.692 { 00:09:02.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.692 "dma_device_type": 2 00:09:02.692 } 00:09:02.692 ], 00:09:02.692 "driver_specific": {} 00:09:02.692 } 00:09:02.692 ] 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.692 "name": "Existed_Raid", 00:09:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.692 "strip_size_kb": 0, 00:09:02.692 "state": "configuring", 00:09:02.692 "raid_level": "raid1", 00:09:02.692 "superblock": false, 00:09:02.692 "num_base_bdevs": 3, 00:09:02.692 "num_base_bdevs_discovered": 2, 00:09:02.692 "num_base_bdevs_operational": 3, 00:09:02.692 "base_bdevs_list": [ 00:09:02.692 { 00:09:02.692 "name": "BaseBdev1", 00:09:02.692 "uuid": "5cd8701f-57bf-41ee-863e-9d9e34107bcc", 00:09:02.692 "is_configured": true, 00:09:02.692 "data_offset": 0, 00:09:02.692 "data_size": 65536 00:09:02.692 }, 00:09:02.692 { 00:09:02.692 "name": "BaseBdev2", 00:09:02.692 "uuid": "13e5408c-7d7c-4b16-8403-2205fa5b5958", 00:09:02.692 "is_configured": true, 00:09:02.692 "data_offset": 0, 00:09:02.692 "data_size": 65536 00:09:02.692 }, 00:09:02.692 { 00:09:02.692 "name": "BaseBdev3", 00:09:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.692 "is_configured": false, 00:09:02.692 "data_offset": 0, 00:09:02.692 "data_size": 0 00:09:02.692 } 00:09:02.692 ] 00:09:02.692 }' 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.692 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.262 BaseBdev3 00:09:03.262 [2024-11-18 23:04:22.366064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.262 [2024-11-18 23:04:22.366106] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:03.262 [2024-11-18 23:04:22.366115] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:03.262 [2024-11-18 23:04:22.366429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:03.262 [2024-11-18 23:04:22.366577] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:03.262 [2024-11-18 23:04:22.366593] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:03.262 [2024-11-18 23:04:22.366792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.262 [ 00:09:03.262 { 00:09:03.262 "name": "BaseBdev3", 00:09:03.262 "aliases": [ 00:09:03.262 "8acc9b10-5b35-4553-bb0c-b909262b8273" 00:09:03.262 ], 00:09:03.262 "product_name": "Malloc disk", 00:09:03.262 "block_size": 512, 00:09:03.262 "num_blocks": 65536, 00:09:03.262 "uuid": "8acc9b10-5b35-4553-bb0c-b909262b8273", 00:09:03.262 "assigned_rate_limits": { 00:09:03.262 "rw_ios_per_sec": 0, 00:09:03.262 "rw_mbytes_per_sec": 0, 00:09:03.262 "r_mbytes_per_sec": 0, 00:09:03.262 "w_mbytes_per_sec": 0 00:09:03.262 }, 00:09:03.262 "claimed": true, 00:09:03.262 "claim_type": "exclusive_write", 00:09:03.262 "zoned": false, 00:09:03.262 "supported_io_types": { 00:09:03.262 "read": true, 00:09:03.262 "write": true, 00:09:03.262 "unmap": true, 00:09:03.262 "flush": true, 00:09:03.262 "reset": true, 00:09:03.262 "nvme_admin": false, 00:09:03.262 "nvme_io": false, 00:09:03.262 "nvme_io_md": false, 00:09:03.262 "write_zeroes": true, 00:09:03.262 "zcopy": true, 00:09:03.262 "get_zone_info": false, 00:09:03.262 "zone_management": false, 00:09:03.262 "zone_append": false, 00:09:03.262 "compare": false, 00:09:03.262 "compare_and_write": false, 00:09:03.262 "abort": true, 00:09:03.262 "seek_hole": false, 00:09:03.262 "seek_data": false, 00:09:03.262 
"copy": true, 00:09:03.262 "nvme_iov_md": false 00:09:03.262 }, 00:09:03.262 "memory_domains": [ 00:09:03.262 { 00:09:03.262 "dma_device_id": "system", 00:09:03.262 "dma_device_type": 1 00:09:03.262 }, 00:09:03.262 { 00:09:03.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.262 "dma_device_type": 2 00:09:03.262 } 00:09:03.262 ], 00:09:03.262 "driver_specific": {} 00:09:03.262 } 00:09:03.262 ] 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.262 23:04:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.262 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.262 "name": "Existed_Raid", 00:09:03.262 "uuid": "e1ca5ba2-64c8-46b8-a1a3-3b0f69ef227c", 00:09:03.262 "strip_size_kb": 0, 00:09:03.262 "state": "online", 00:09:03.262 "raid_level": "raid1", 00:09:03.262 "superblock": false, 00:09:03.262 "num_base_bdevs": 3, 00:09:03.262 "num_base_bdevs_discovered": 3, 00:09:03.262 "num_base_bdevs_operational": 3, 00:09:03.262 "base_bdevs_list": [ 00:09:03.262 { 00:09:03.262 "name": "BaseBdev1", 00:09:03.262 "uuid": "5cd8701f-57bf-41ee-863e-9d9e34107bcc", 00:09:03.262 "is_configured": true, 00:09:03.262 "data_offset": 0, 00:09:03.262 "data_size": 65536 00:09:03.262 }, 00:09:03.262 { 00:09:03.262 "name": "BaseBdev2", 00:09:03.262 "uuid": "13e5408c-7d7c-4b16-8403-2205fa5b5958", 00:09:03.262 "is_configured": true, 00:09:03.262 "data_offset": 0, 00:09:03.262 "data_size": 65536 00:09:03.262 }, 00:09:03.262 { 00:09:03.262 "name": "BaseBdev3", 00:09:03.262 "uuid": "8acc9b10-5b35-4553-bb0c-b909262b8273", 00:09:03.262 "is_configured": true, 00:09:03.262 "data_offset": 0, 00:09:03.262 "data_size": 65536 00:09:03.262 } 00:09:03.262 ] 00:09:03.263 }' 00:09:03.263 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.263 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.523 23:04:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.523 [2024-11-18 23:04:22.809605] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.523 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.523 "name": "Existed_Raid", 00:09:03.523 "aliases": [ 00:09:03.523 "e1ca5ba2-64c8-46b8-a1a3-3b0f69ef227c" 00:09:03.523 ], 00:09:03.523 "product_name": "Raid Volume", 00:09:03.523 "block_size": 512, 00:09:03.523 "num_blocks": 65536, 00:09:03.523 "uuid": "e1ca5ba2-64c8-46b8-a1a3-3b0f69ef227c", 00:09:03.523 "assigned_rate_limits": { 00:09:03.523 "rw_ios_per_sec": 0, 00:09:03.523 "rw_mbytes_per_sec": 0, 00:09:03.523 "r_mbytes_per_sec": 0, 00:09:03.523 "w_mbytes_per_sec": 0 00:09:03.523 }, 00:09:03.523 "claimed": false, 00:09:03.523 "zoned": false, 
00:09:03.523 "supported_io_types": { 00:09:03.523 "read": true, 00:09:03.523 "write": true, 00:09:03.523 "unmap": false, 00:09:03.523 "flush": false, 00:09:03.523 "reset": true, 00:09:03.523 "nvme_admin": false, 00:09:03.523 "nvme_io": false, 00:09:03.523 "nvme_io_md": false, 00:09:03.523 "write_zeroes": true, 00:09:03.523 "zcopy": false, 00:09:03.523 "get_zone_info": false, 00:09:03.523 "zone_management": false, 00:09:03.523 "zone_append": false, 00:09:03.523 "compare": false, 00:09:03.523 "compare_and_write": false, 00:09:03.523 "abort": false, 00:09:03.523 "seek_hole": false, 00:09:03.523 "seek_data": false, 00:09:03.523 "copy": false, 00:09:03.523 "nvme_iov_md": false 00:09:03.523 }, 00:09:03.523 "memory_domains": [ 00:09:03.523 { 00:09:03.523 "dma_device_id": "system", 00:09:03.523 "dma_device_type": 1 00:09:03.523 }, 00:09:03.523 { 00:09:03.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.523 "dma_device_type": 2 00:09:03.523 }, 00:09:03.523 { 00:09:03.523 "dma_device_id": "system", 00:09:03.523 "dma_device_type": 1 00:09:03.523 }, 00:09:03.523 { 00:09:03.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.524 "dma_device_type": 2 00:09:03.524 }, 00:09:03.524 { 00:09:03.524 "dma_device_id": "system", 00:09:03.524 "dma_device_type": 1 00:09:03.524 }, 00:09:03.524 { 00:09:03.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.524 "dma_device_type": 2 00:09:03.524 } 00:09:03.524 ], 00:09:03.524 "driver_specific": { 00:09:03.524 "raid": { 00:09:03.524 "uuid": "e1ca5ba2-64c8-46b8-a1a3-3b0f69ef227c", 00:09:03.524 "strip_size_kb": 0, 00:09:03.524 "state": "online", 00:09:03.524 "raid_level": "raid1", 00:09:03.524 "superblock": false, 00:09:03.524 "num_base_bdevs": 3, 00:09:03.524 "num_base_bdevs_discovered": 3, 00:09:03.524 "num_base_bdevs_operational": 3, 00:09:03.524 "base_bdevs_list": [ 00:09:03.524 { 00:09:03.524 "name": "BaseBdev1", 00:09:03.524 "uuid": "5cd8701f-57bf-41ee-863e-9d9e34107bcc", 00:09:03.524 "is_configured": true, 00:09:03.524 
"data_offset": 0, 00:09:03.524 "data_size": 65536 00:09:03.524 }, 00:09:03.524 { 00:09:03.524 "name": "BaseBdev2", 00:09:03.524 "uuid": "13e5408c-7d7c-4b16-8403-2205fa5b5958", 00:09:03.524 "is_configured": true, 00:09:03.524 "data_offset": 0, 00:09:03.524 "data_size": 65536 00:09:03.524 }, 00:09:03.524 { 00:09:03.524 "name": "BaseBdev3", 00:09:03.524 "uuid": "8acc9b10-5b35-4553-bb0c-b909262b8273", 00:09:03.524 "is_configured": true, 00:09:03.524 "data_offset": 0, 00:09:03.524 "data_size": 65536 00:09:03.524 } 00:09:03.524 ] 00:09:03.524 } 00:09:03.524 } 00:09:03.524 }' 00:09:03.524 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.524 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:03.524 BaseBdev2 00:09:03.524 BaseBdev3' 00:09:03.524 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.787 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.787 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.788 [2024-11-18 23:04:23.104881] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.788 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.049 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.049 "name": "Existed_Raid", 00:09:04.049 "uuid": "e1ca5ba2-64c8-46b8-a1a3-3b0f69ef227c", 00:09:04.049 "strip_size_kb": 0, 00:09:04.049 "state": "online", 00:09:04.049 "raid_level": "raid1", 00:09:04.049 "superblock": false, 00:09:04.049 "num_base_bdevs": 3, 00:09:04.049 "num_base_bdevs_discovered": 2, 00:09:04.049 "num_base_bdevs_operational": 2, 00:09:04.049 "base_bdevs_list": [ 00:09:04.049 { 00:09:04.049 "name": null, 00:09:04.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.049 "is_configured": false, 00:09:04.049 "data_offset": 0, 00:09:04.049 "data_size": 65536 00:09:04.049 }, 00:09:04.049 { 00:09:04.049 "name": "BaseBdev2", 00:09:04.049 "uuid": "13e5408c-7d7c-4b16-8403-2205fa5b5958", 00:09:04.049 "is_configured": true, 00:09:04.049 "data_offset": 0, 00:09:04.049 "data_size": 65536 00:09:04.049 }, 00:09:04.049 { 00:09:04.049 "name": "BaseBdev3", 00:09:04.049 "uuid": "8acc9b10-5b35-4553-bb0c-b909262b8273", 00:09:04.049 "is_configured": true, 00:09:04.049 "data_offset": 0, 00:09:04.049 "data_size": 65536 00:09:04.049 } 00:09:04.049 ] 
00:09:04.049 }' 00:09:04.049 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.049 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 [2024-11-18 23:04:23.579333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.310 23:04:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 [2024-11-18 23:04:23.646374] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.310 [2024-11-18 23:04:23.646499] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.310 [2024-11-18 23:04:23.657869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.310 [2024-11-18 23:04:23.657990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.310 [2024-11-18 23:04:23.658010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.310 23:04:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.310 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 BaseBdev2 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.583 
23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 [ 00:09:04.583 { 00:09:04.583 "name": "BaseBdev2", 00:09:04.583 "aliases": [ 00:09:04.583 "0fe85014-f7dc-40e6-9d91-c41d94fbf786" 00:09:04.583 ], 00:09:04.583 "product_name": "Malloc disk", 00:09:04.583 "block_size": 512, 00:09:04.583 "num_blocks": 65536, 00:09:04.583 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:04.583 "assigned_rate_limits": { 00:09:04.583 "rw_ios_per_sec": 0, 00:09:04.583 "rw_mbytes_per_sec": 0, 00:09:04.583 "r_mbytes_per_sec": 0, 00:09:04.583 "w_mbytes_per_sec": 0 00:09:04.583 }, 00:09:04.583 "claimed": false, 00:09:04.583 "zoned": false, 00:09:04.583 "supported_io_types": { 00:09:04.583 "read": true, 00:09:04.583 "write": true, 00:09:04.583 "unmap": true, 00:09:04.583 "flush": true, 00:09:04.583 "reset": true, 00:09:04.583 "nvme_admin": false, 00:09:04.583 "nvme_io": false, 00:09:04.583 "nvme_io_md": false, 00:09:04.583 "write_zeroes": true, 
00:09:04.583 "zcopy": true, 00:09:04.583 "get_zone_info": false, 00:09:04.583 "zone_management": false, 00:09:04.583 "zone_append": false, 00:09:04.583 "compare": false, 00:09:04.583 "compare_and_write": false, 00:09:04.583 "abort": true, 00:09:04.583 "seek_hole": false, 00:09:04.583 "seek_data": false, 00:09:04.583 "copy": true, 00:09:04.583 "nvme_iov_md": false 00:09:04.583 }, 00:09:04.583 "memory_domains": [ 00:09:04.583 { 00:09:04.583 "dma_device_id": "system", 00:09:04.583 "dma_device_type": 1 00:09:04.583 }, 00:09:04.583 { 00:09:04.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.583 "dma_device_type": 2 00:09:04.583 } 00:09:04.583 ], 00:09:04.583 "driver_specific": {} 00:09:04.583 } 00:09:04.583 ] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 BaseBdev3 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.583 23:04:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 [ 00:09:04.583 { 00:09:04.583 "name": "BaseBdev3", 00:09:04.583 "aliases": [ 00:09:04.583 "79e01226-ad03-4ec7-8d24-18a38b824ce2" 00:09:04.583 ], 00:09:04.583 "product_name": "Malloc disk", 00:09:04.583 "block_size": 512, 00:09:04.583 "num_blocks": 65536, 00:09:04.583 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:04.583 "assigned_rate_limits": { 00:09:04.583 "rw_ios_per_sec": 0, 00:09:04.583 "rw_mbytes_per_sec": 0, 00:09:04.583 "r_mbytes_per_sec": 0, 00:09:04.583 "w_mbytes_per_sec": 0 00:09:04.583 }, 00:09:04.583 "claimed": false, 00:09:04.583 "zoned": false, 00:09:04.583 "supported_io_types": { 00:09:04.583 "read": true, 00:09:04.583 "write": true, 00:09:04.583 "unmap": true, 00:09:04.583 "flush": true, 00:09:04.583 "reset": true, 00:09:04.583 "nvme_admin": false, 00:09:04.583 "nvme_io": false, 00:09:04.583 "nvme_io_md": false, 00:09:04.583 "write_zeroes": true, 
00:09:04.583 "zcopy": true, 00:09:04.583 "get_zone_info": false, 00:09:04.583 "zone_management": false, 00:09:04.583 "zone_append": false, 00:09:04.583 "compare": false, 00:09:04.583 "compare_and_write": false, 00:09:04.583 "abort": true, 00:09:04.583 "seek_hole": false, 00:09:04.583 "seek_data": false, 00:09:04.583 "copy": true, 00:09:04.583 "nvme_iov_md": false 00:09:04.583 }, 00:09:04.583 "memory_domains": [ 00:09:04.583 { 00:09:04.583 "dma_device_id": "system", 00:09:04.583 "dma_device_type": 1 00:09:04.583 }, 00:09:04.583 { 00:09:04.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.583 "dma_device_type": 2 00:09:04.583 } 00:09:04.583 ], 00:09:04.583 "driver_specific": {} 00:09:04.583 } 00:09:04.583 ] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.583 [2024-11-18 23:04:23.821630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.583 [2024-11-18 23:04:23.821729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.583 [2024-11-18 23:04:23.821767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.583 [2024-11-18 23:04:23.823535] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.583 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:04.584 "name": "Existed_Raid", 00:09:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.584 "strip_size_kb": 0, 00:09:04.584 "state": "configuring", 00:09:04.584 "raid_level": "raid1", 00:09:04.584 "superblock": false, 00:09:04.584 "num_base_bdevs": 3, 00:09:04.584 "num_base_bdevs_discovered": 2, 00:09:04.584 "num_base_bdevs_operational": 3, 00:09:04.584 "base_bdevs_list": [ 00:09:04.584 { 00:09:04.584 "name": "BaseBdev1", 00:09:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.584 "is_configured": false, 00:09:04.584 "data_offset": 0, 00:09:04.584 "data_size": 0 00:09:04.584 }, 00:09:04.584 { 00:09:04.584 "name": "BaseBdev2", 00:09:04.584 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:04.584 "is_configured": true, 00:09:04.584 "data_offset": 0, 00:09:04.584 "data_size": 65536 00:09:04.584 }, 00:09:04.584 { 00:09:04.584 "name": "BaseBdev3", 00:09:04.584 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:04.584 "is_configured": true, 00:09:04.584 "data_offset": 0, 00:09:04.584 "data_size": 65536 00:09:04.584 } 00:09:04.584 ] 00:09:04.584 }' 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.584 23:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.846 [2024-11-18 23:04:24.216948] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.846 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.106 "name": "Existed_Raid", 00:09:05.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.106 "strip_size_kb": 0, 00:09:05.106 "state": "configuring", 00:09:05.106 "raid_level": "raid1", 00:09:05.106 "superblock": false, 00:09:05.106 "num_base_bdevs": 3, 
00:09:05.106 "num_base_bdevs_discovered": 1, 00:09:05.106 "num_base_bdevs_operational": 3, 00:09:05.106 "base_bdevs_list": [ 00:09:05.106 { 00:09:05.106 "name": "BaseBdev1", 00:09:05.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.106 "is_configured": false, 00:09:05.106 "data_offset": 0, 00:09:05.106 "data_size": 0 00:09:05.106 }, 00:09:05.106 { 00:09:05.106 "name": null, 00:09:05.106 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:05.106 "is_configured": false, 00:09:05.106 "data_offset": 0, 00:09:05.106 "data_size": 65536 00:09:05.106 }, 00:09:05.106 { 00:09:05.106 "name": "BaseBdev3", 00:09:05.106 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:05.106 "is_configured": true, 00:09:05.106 "data_offset": 0, 00:09:05.106 "data_size": 65536 00:09:05.106 } 00:09:05.106 ] 00:09:05.106 }' 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.106 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.369 23:04:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 [2024-11-18 23:04:24.703082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.369 BaseBdev1 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 [ 00:09:05.369 { 00:09:05.369 "name": "BaseBdev1", 00:09:05.369 "aliases": [ 00:09:05.369 "c3bd488d-0970-421d-beab-7f8779b8eed0" 00:09:05.369 ], 00:09:05.369 "product_name": "Malloc disk", 
00:09:05.369 "block_size": 512, 00:09:05.369 "num_blocks": 65536, 00:09:05.369 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:05.369 "assigned_rate_limits": { 00:09:05.369 "rw_ios_per_sec": 0, 00:09:05.369 "rw_mbytes_per_sec": 0, 00:09:05.369 "r_mbytes_per_sec": 0, 00:09:05.369 "w_mbytes_per_sec": 0 00:09:05.369 }, 00:09:05.369 "claimed": true, 00:09:05.369 "claim_type": "exclusive_write", 00:09:05.369 "zoned": false, 00:09:05.369 "supported_io_types": { 00:09:05.369 "read": true, 00:09:05.369 "write": true, 00:09:05.369 "unmap": true, 00:09:05.369 "flush": true, 00:09:05.369 "reset": true, 00:09:05.369 "nvme_admin": false, 00:09:05.369 "nvme_io": false, 00:09:05.369 "nvme_io_md": false, 00:09:05.369 "write_zeroes": true, 00:09:05.369 "zcopy": true, 00:09:05.369 "get_zone_info": false, 00:09:05.369 "zone_management": false, 00:09:05.369 "zone_append": false, 00:09:05.369 "compare": false, 00:09:05.369 "compare_and_write": false, 00:09:05.369 "abort": true, 00:09:05.369 "seek_hole": false, 00:09:05.369 "seek_data": false, 00:09:05.369 "copy": true, 00:09:05.369 "nvme_iov_md": false 00:09:05.369 }, 00:09:05.369 "memory_domains": [ 00:09:05.369 { 00:09:05.369 "dma_device_id": "system", 00:09:05.369 "dma_device_type": 1 00:09:05.369 }, 00:09:05.369 { 00:09:05.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.369 "dma_device_type": 2 00:09:05.369 } 00:09:05.369 ], 00:09:05.369 "driver_specific": {} 00:09:05.369 } 00:09:05.369 ] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.369 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.628 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.628 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.628 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.628 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.628 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.628 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.628 "name": "Existed_Raid", 00:09:05.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.628 "strip_size_kb": 0, 00:09:05.628 "state": "configuring", 00:09:05.628 "raid_level": "raid1", 00:09:05.628 "superblock": false, 00:09:05.628 "num_base_bdevs": 3, 00:09:05.628 "num_base_bdevs_discovered": 2, 00:09:05.628 "num_base_bdevs_operational": 3, 00:09:05.628 "base_bdevs_list": [ 00:09:05.628 { 00:09:05.628 "name": "BaseBdev1", 00:09:05.628 "uuid": 
"c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:05.628 "is_configured": true, 00:09:05.629 "data_offset": 0, 00:09:05.629 "data_size": 65536 00:09:05.629 }, 00:09:05.629 { 00:09:05.629 "name": null, 00:09:05.629 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:05.629 "is_configured": false, 00:09:05.629 "data_offset": 0, 00:09:05.629 "data_size": 65536 00:09:05.629 }, 00:09:05.629 { 00:09:05.629 "name": "BaseBdev3", 00:09:05.629 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:05.629 "is_configured": true, 00:09:05.629 "data_offset": 0, 00:09:05.629 "data_size": 65536 00:09:05.629 } 00:09:05.629 ] 00:09:05.629 }' 00:09:05.629 23:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.629 23:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.889 [2024-11-18 23:04:25.178312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:05.889 23:04:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.889 "name": "Existed_Raid", 00:09:05.889 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:05.889 "strip_size_kb": 0, 00:09:05.889 "state": "configuring", 00:09:05.889 "raid_level": "raid1", 00:09:05.889 "superblock": false, 00:09:05.889 "num_base_bdevs": 3, 00:09:05.889 "num_base_bdevs_discovered": 1, 00:09:05.889 "num_base_bdevs_operational": 3, 00:09:05.889 "base_bdevs_list": [ 00:09:05.889 { 00:09:05.889 "name": "BaseBdev1", 00:09:05.889 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:05.889 "is_configured": true, 00:09:05.889 "data_offset": 0, 00:09:05.889 "data_size": 65536 00:09:05.889 }, 00:09:05.889 { 00:09:05.889 "name": null, 00:09:05.889 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:05.889 "is_configured": false, 00:09:05.889 "data_offset": 0, 00:09:05.889 "data_size": 65536 00:09:05.889 }, 00:09:05.889 { 00:09:05.889 "name": null, 00:09:05.889 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:05.889 "is_configured": false, 00:09:05.889 "data_offset": 0, 00:09:05.889 "data_size": 65536 00:09:05.889 } 00:09:05.889 ] 00:09:05.889 }' 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.889 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.459 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.460 [2024-11-18 23:04:25.637542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.460 "name": "Existed_Raid", 00:09:06.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.460 "strip_size_kb": 0, 00:09:06.460 "state": "configuring", 00:09:06.460 "raid_level": "raid1", 00:09:06.460 "superblock": false, 00:09:06.460 "num_base_bdevs": 3, 00:09:06.460 "num_base_bdevs_discovered": 2, 00:09:06.460 "num_base_bdevs_operational": 3, 00:09:06.460 "base_bdevs_list": [ 00:09:06.460 { 00:09:06.460 "name": "BaseBdev1", 00:09:06.460 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:06.460 "is_configured": true, 00:09:06.460 "data_offset": 0, 00:09:06.460 "data_size": 65536 00:09:06.460 }, 00:09:06.460 { 00:09:06.460 "name": null, 00:09:06.460 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:06.460 "is_configured": false, 00:09:06.460 "data_offset": 0, 00:09:06.460 "data_size": 65536 00:09:06.460 }, 00:09:06.460 { 00:09:06.460 "name": "BaseBdev3", 00:09:06.460 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:06.460 "is_configured": true, 00:09:06.460 "data_offset": 0, 00:09:06.460 "data_size": 65536 00:09:06.460 } 00:09:06.460 ] 00:09:06.460 }' 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.460 23:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.731 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.732 [2024-11-18 23:04:26.080760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.732 23:04:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.732 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.990 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.990 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.990 "name": "Existed_Raid", 00:09:06.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.990 "strip_size_kb": 0, 00:09:06.990 "state": "configuring", 00:09:06.990 "raid_level": "raid1", 00:09:06.990 "superblock": false, 00:09:06.990 "num_base_bdevs": 3, 00:09:06.990 "num_base_bdevs_discovered": 1, 00:09:06.990 "num_base_bdevs_operational": 3, 00:09:06.990 "base_bdevs_list": [ 00:09:06.990 { 00:09:06.990 "name": null, 00:09:06.990 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:06.990 "is_configured": false, 00:09:06.990 "data_offset": 0, 00:09:06.990 "data_size": 65536 00:09:06.990 }, 00:09:06.990 { 00:09:06.990 "name": null, 00:09:06.990 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:06.990 "is_configured": false, 00:09:06.990 "data_offset": 0, 00:09:06.990 "data_size": 65536 00:09:06.990 }, 00:09:06.990 { 00:09:06.990 "name": "BaseBdev3", 00:09:06.990 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:06.990 "is_configured": true, 00:09:06.990 "data_offset": 0, 00:09:06.990 "data_size": 65536 00:09:06.990 } 00:09:06.990 ] 00:09:06.990 }' 00:09:06.990 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.990 23:04:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.250 [2024-11-18 23:04:26.534408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.250 "name": "Existed_Raid", 00:09:07.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.250 "strip_size_kb": 0, 00:09:07.250 "state": "configuring", 00:09:07.250 "raid_level": "raid1", 00:09:07.250 "superblock": false, 00:09:07.250 "num_base_bdevs": 3, 00:09:07.250 "num_base_bdevs_discovered": 2, 00:09:07.250 "num_base_bdevs_operational": 3, 00:09:07.250 "base_bdevs_list": [ 00:09:07.250 { 00:09:07.250 "name": null, 00:09:07.250 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:07.250 "is_configured": false, 00:09:07.250 "data_offset": 0, 00:09:07.250 "data_size": 65536 00:09:07.250 }, 00:09:07.250 { 00:09:07.250 "name": "BaseBdev2", 00:09:07.250 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:07.250 "is_configured": true, 00:09:07.250 "data_offset": 0, 00:09:07.250 "data_size": 65536 00:09:07.250 }, 00:09:07.250 { 
00:09:07.250 "name": "BaseBdev3", 00:09:07.250 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:07.250 "is_configured": true, 00:09:07.250 "data_offset": 0, 00:09:07.250 "data_size": 65536 00:09:07.250 } 00:09:07.250 ] 00:09:07.250 }' 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.250 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 23:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3bd488d-0970-421d-beab-7f8779b8eed0 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.821 23:04:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 [2024-11-18 23:04:27.028472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:07.821 [2024-11-18 23:04:27.028571] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:07.821 [2024-11-18 23:04:27.028585] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:07.821 [2024-11-18 23:04:27.028844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:07.821 [2024-11-18 23:04:27.028979] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:07.821 [2024-11-18 23:04:27.028992] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:07.821 [2024-11-18 23:04:27.029165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.821 NewBaseBdev 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 [ 00:09:07.821 { 00:09:07.821 "name": "NewBaseBdev", 00:09:07.821 "aliases": [ 00:09:07.821 "c3bd488d-0970-421d-beab-7f8779b8eed0" 00:09:07.821 ], 00:09:07.821 "product_name": "Malloc disk", 00:09:07.821 "block_size": 512, 00:09:07.821 "num_blocks": 65536, 00:09:07.821 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:07.821 "assigned_rate_limits": { 00:09:07.821 "rw_ios_per_sec": 0, 00:09:07.821 "rw_mbytes_per_sec": 0, 00:09:07.821 "r_mbytes_per_sec": 0, 00:09:07.821 "w_mbytes_per_sec": 0 00:09:07.821 }, 00:09:07.821 "claimed": true, 00:09:07.821 "claim_type": "exclusive_write", 00:09:07.821 "zoned": false, 00:09:07.821 "supported_io_types": { 00:09:07.821 "read": true, 00:09:07.821 "write": true, 00:09:07.821 "unmap": true, 00:09:07.821 "flush": true, 00:09:07.821 "reset": true, 00:09:07.821 "nvme_admin": false, 00:09:07.821 "nvme_io": false, 00:09:07.821 "nvme_io_md": false, 00:09:07.821 "write_zeroes": true, 00:09:07.821 "zcopy": true, 00:09:07.821 "get_zone_info": false, 00:09:07.821 "zone_management": false, 00:09:07.821 "zone_append": false, 00:09:07.821 "compare": false, 00:09:07.821 "compare_and_write": false, 00:09:07.821 "abort": true, 00:09:07.821 "seek_hole": false, 00:09:07.821 "seek_data": false, 00:09:07.821 "copy": true, 00:09:07.821 "nvme_iov_md": false 00:09:07.821 }, 00:09:07.821 "memory_domains": [ 00:09:07.821 { 00:09:07.821 
"dma_device_id": "system", 00:09:07.821 "dma_device_type": 1 00:09:07.821 }, 00:09:07.821 { 00:09:07.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.821 "dma_device_type": 2 00:09:07.821 } 00:09:07.821 ], 00:09:07.821 "driver_specific": {} 00:09:07.821 } 00:09:07.821 ] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.821 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.821 "name": "Existed_Raid", 00:09:07.821 "uuid": "7e6c2d54-dbbd-470e-95bb-7078e1f5ff35", 00:09:07.821 "strip_size_kb": 0, 00:09:07.821 "state": "online", 00:09:07.821 "raid_level": "raid1", 00:09:07.821 "superblock": false, 00:09:07.821 "num_base_bdevs": 3, 00:09:07.821 "num_base_bdevs_discovered": 3, 00:09:07.821 "num_base_bdevs_operational": 3, 00:09:07.821 "base_bdevs_list": [ 00:09:07.821 { 00:09:07.821 "name": "NewBaseBdev", 00:09:07.821 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:07.821 "is_configured": true, 00:09:07.821 "data_offset": 0, 00:09:07.821 "data_size": 65536 00:09:07.821 }, 00:09:07.821 { 00:09:07.821 "name": "BaseBdev2", 00:09:07.821 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:07.821 "is_configured": true, 00:09:07.821 "data_offset": 0, 00:09:07.821 "data_size": 65536 00:09:07.821 }, 00:09:07.821 { 00:09:07.821 "name": "BaseBdev3", 00:09:07.821 "uuid": "79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:07.821 "is_configured": true, 00:09:07.822 "data_offset": 0, 00:09:07.822 "data_size": 65536 00:09:07.822 } 00:09:07.822 ] 00:09:07.822 }' 00:09:07.822 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.822 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.389 23:04:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.389 [2024-11-18 23:04:27.507973] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.389 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.390 "name": "Existed_Raid", 00:09:08.390 "aliases": [ 00:09:08.390 "7e6c2d54-dbbd-470e-95bb-7078e1f5ff35" 00:09:08.390 ], 00:09:08.390 "product_name": "Raid Volume", 00:09:08.390 "block_size": 512, 00:09:08.390 "num_blocks": 65536, 00:09:08.390 "uuid": "7e6c2d54-dbbd-470e-95bb-7078e1f5ff35", 00:09:08.390 "assigned_rate_limits": { 00:09:08.390 "rw_ios_per_sec": 0, 00:09:08.390 "rw_mbytes_per_sec": 0, 00:09:08.390 "r_mbytes_per_sec": 0, 00:09:08.390 "w_mbytes_per_sec": 0 00:09:08.390 }, 00:09:08.390 "claimed": false, 00:09:08.390 "zoned": false, 00:09:08.390 "supported_io_types": { 00:09:08.390 "read": true, 00:09:08.390 "write": true, 00:09:08.390 "unmap": false, 00:09:08.390 "flush": false, 00:09:08.390 "reset": true, 00:09:08.390 "nvme_admin": false, 00:09:08.390 "nvme_io": false, 00:09:08.390 "nvme_io_md": false, 00:09:08.390 "write_zeroes": true, 00:09:08.390 "zcopy": false, 00:09:08.390 
"get_zone_info": false, 00:09:08.390 "zone_management": false, 00:09:08.390 "zone_append": false, 00:09:08.390 "compare": false, 00:09:08.390 "compare_and_write": false, 00:09:08.390 "abort": false, 00:09:08.390 "seek_hole": false, 00:09:08.390 "seek_data": false, 00:09:08.390 "copy": false, 00:09:08.390 "nvme_iov_md": false 00:09:08.390 }, 00:09:08.390 "memory_domains": [ 00:09:08.390 { 00:09:08.390 "dma_device_id": "system", 00:09:08.390 "dma_device_type": 1 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.390 "dma_device_type": 2 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "dma_device_id": "system", 00:09:08.390 "dma_device_type": 1 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.390 "dma_device_type": 2 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "dma_device_id": "system", 00:09:08.390 "dma_device_type": 1 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.390 "dma_device_type": 2 00:09:08.390 } 00:09:08.390 ], 00:09:08.390 "driver_specific": { 00:09:08.390 "raid": { 00:09:08.390 "uuid": "7e6c2d54-dbbd-470e-95bb-7078e1f5ff35", 00:09:08.390 "strip_size_kb": 0, 00:09:08.390 "state": "online", 00:09:08.390 "raid_level": "raid1", 00:09:08.390 "superblock": false, 00:09:08.390 "num_base_bdevs": 3, 00:09:08.390 "num_base_bdevs_discovered": 3, 00:09:08.390 "num_base_bdevs_operational": 3, 00:09:08.390 "base_bdevs_list": [ 00:09:08.390 { 00:09:08.390 "name": "NewBaseBdev", 00:09:08.390 "uuid": "c3bd488d-0970-421d-beab-7f8779b8eed0", 00:09:08.390 "is_configured": true, 00:09:08.390 "data_offset": 0, 00:09:08.390 "data_size": 65536 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "name": "BaseBdev2", 00:09:08.390 "uuid": "0fe85014-f7dc-40e6-9d91-c41d94fbf786", 00:09:08.390 "is_configured": true, 00:09:08.390 "data_offset": 0, 00:09:08.390 "data_size": 65536 00:09:08.390 }, 00:09:08.390 { 00:09:08.390 "name": "BaseBdev3", 00:09:08.390 "uuid": 
"79e01226-ad03-4ec7-8d24-18a38b824ce2", 00:09:08.390 "is_configured": true, 00:09:08.390 "data_offset": 0, 00:09:08.390 "data_size": 65536 00:09:08.390 } 00:09:08.390 ] 00:09:08.390 } 00:09:08.390 } 00:09:08.390 }' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:08.390 BaseBdev2 00:09:08.390 BaseBdev3' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.390 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.652 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.652 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.652 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.653 
[2024-11-18 23:04:27.807240] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.653 [2024-11-18 23:04:27.807334] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.653 [2024-11-18 23:04:27.807401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.653 [2024-11-18 23:04:27.807646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.653 [2024-11-18 23:04:27.807655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78424 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78424 ']' 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78424 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78424 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78424' 00:09:08.653 killing process with pid 78424 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78424 00:09:08.653 [2024-11-18 
23:04:27.841043] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.653 23:04:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78424 00:09:08.653 [2024-11-18 23:04:27.871265] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.925 ************************************ 00:09:08.925 END TEST raid_state_function_test 00:09:08.925 ************************************ 00:09:08.925 23:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:08.925 00:09:08.925 real 0m8.511s 00:09:08.925 user 0m14.597s 00:09:08.925 sys 0m1.646s 00:09:08.925 23:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.925 23:04:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.925 23:04:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:08.925 23:04:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.926 23:04:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.926 23:04:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.926 ************************************ 00:09:08.926 START TEST raid_state_function_test_sb 00:09:08.926 ************************************ 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.926 23:04:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:08.926 
23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79024 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79024' 00:09:08.926 Process raid pid: 79024 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79024 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79024 ']' 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.926 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.926 [2024-11-18 23:04:28.272184] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:08.926 [2024-11-18 23:04:28.272421] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.201 [2024-11-18 23:04:28.435192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.201 [2024-11-18 23:04:28.481044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.201 [2024-11-18 23:04:28.523256] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.201 [2024-11-18 23:04:28.523383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.770 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.770 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:09.770 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.770 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.770 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.770 [2024-11-18 23:04:29.096991] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.770 [2024-11-18 23:04:29.097042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.770 [2024-11-18 23:04:29.097061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.771 [2024-11-18 23:04:29.097071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.771 [2024-11-18 23:04:29.097077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:09.771 [2024-11-18 23:04:29.097090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.771 23:04:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.029 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.029 "name": "Existed_Raid", 00:09:10.029 "uuid": "86c66a67-ffcb-4a2f-a528-69776e72d522", 00:09:10.029 "strip_size_kb": 0, 00:09:10.029 "state": "configuring", 00:09:10.029 "raid_level": "raid1", 00:09:10.029 "superblock": true, 00:09:10.029 "num_base_bdevs": 3, 00:09:10.029 "num_base_bdevs_discovered": 0, 00:09:10.029 "num_base_bdevs_operational": 3, 00:09:10.029 "base_bdevs_list": [ 00:09:10.029 { 00:09:10.029 "name": "BaseBdev1", 00:09:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.029 "is_configured": false, 00:09:10.029 "data_offset": 0, 00:09:10.029 "data_size": 0 00:09:10.029 }, 00:09:10.029 { 00:09:10.029 "name": "BaseBdev2", 00:09:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.029 "is_configured": false, 00:09:10.029 "data_offset": 0, 00:09:10.029 "data_size": 0 00:09:10.029 }, 00:09:10.029 { 00:09:10.029 "name": "BaseBdev3", 00:09:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.029 "is_configured": false, 00:09:10.029 "data_offset": 0, 00:09:10.029 "data_size": 0 00:09:10.029 } 00:09:10.029 ] 00:09:10.029 }' 00:09:10.029 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.029 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.289 [2024-11-18 23:04:29.508226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.289 [2024-11-18 23:04:29.508322] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.289 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.290 [2024-11-18 23:04:29.520237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.290 [2024-11-18 23:04:29.520324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.290 [2024-11-18 23:04:29.520354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.290 [2024-11-18 23:04:29.520376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.290 [2024-11-18 23:04:29.520393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.290 [2024-11-18 23:04:29.520429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.290 [2024-11-18 23:04:29.541017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.290 BaseBdev1 
00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.290 [ 00:09:10.290 { 00:09:10.290 "name": "BaseBdev1", 00:09:10.290 "aliases": [ 00:09:10.290 "c8f79708-31be-467a-aea2-e35311b50a30" 00:09:10.290 ], 00:09:10.290 "product_name": "Malloc disk", 00:09:10.290 "block_size": 512, 00:09:10.290 "num_blocks": 65536, 00:09:10.290 "uuid": "c8f79708-31be-467a-aea2-e35311b50a30", 00:09:10.290 "assigned_rate_limits": { 00:09:10.290 
"rw_ios_per_sec": 0, 00:09:10.290 "rw_mbytes_per_sec": 0, 00:09:10.290 "r_mbytes_per_sec": 0, 00:09:10.290 "w_mbytes_per_sec": 0 00:09:10.290 }, 00:09:10.290 "claimed": true, 00:09:10.290 "claim_type": "exclusive_write", 00:09:10.290 "zoned": false, 00:09:10.290 "supported_io_types": { 00:09:10.290 "read": true, 00:09:10.290 "write": true, 00:09:10.290 "unmap": true, 00:09:10.290 "flush": true, 00:09:10.290 "reset": true, 00:09:10.290 "nvme_admin": false, 00:09:10.290 "nvme_io": false, 00:09:10.290 "nvme_io_md": false, 00:09:10.290 "write_zeroes": true, 00:09:10.290 "zcopy": true, 00:09:10.290 "get_zone_info": false, 00:09:10.290 "zone_management": false, 00:09:10.290 "zone_append": false, 00:09:10.290 "compare": false, 00:09:10.290 "compare_and_write": false, 00:09:10.290 "abort": true, 00:09:10.290 "seek_hole": false, 00:09:10.290 "seek_data": false, 00:09:10.290 "copy": true, 00:09:10.290 "nvme_iov_md": false 00:09:10.290 }, 00:09:10.290 "memory_domains": [ 00:09:10.290 { 00:09:10.290 "dma_device_id": "system", 00:09:10.290 "dma_device_type": 1 00:09:10.290 }, 00:09:10.290 { 00:09:10.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.290 "dma_device_type": 2 00:09:10.290 } 00:09:10.290 ], 00:09:10.290 "driver_specific": {} 00:09:10.290 } 00:09:10.290 ] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.290 "name": "Existed_Raid", 00:09:10.290 "uuid": "1be3e2f5-b1b1-4b22-b882-6721a6de2642", 00:09:10.290 "strip_size_kb": 0, 00:09:10.290 "state": "configuring", 00:09:10.290 "raid_level": "raid1", 00:09:10.290 "superblock": true, 00:09:10.290 "num_base_bdevs": 3, 00:09:10.290 "num_base_bdevs_discovered": 1, 00:09:10.290 "num_base_bdevs_operational": 3, 00:09:10.290 "base_bdevs_list": [ 00:09:10.290 { 00:09:10.290 "name": "BaseBdev1", 00:09:10.290 "uuid": "c8f79708-31be-467a-aea2-e35311b50a30", 00:09:10.290 "is_configured": true, 00:09:10.290 "data_offset": 2048, 00:09:10.290 "data_size": 63488 
00:09:10.290 }, 00:09:10.290 { 00:09:10.290 "name": "BaseBdev2", 00:09:10.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.290 "is_configured": false, 00:09:10.290 "data_offset": 0, 00:09:10.290 "data_size": 0 00:09:10.290 }, 00:09:10.290 { 00:09:10.290 "name": "BaseBdev3", 00:09:10.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.290 "is_configured": false, 00:09:10.290 "data_offset": 0, 00:09:10.290 "data_size": 0 00:09:10.290 } 00:09:10.290 ] 00:09:10.290 }' 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.290 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.861 [2024-11-18 23:04:30.024213] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.861 [2024-11-18 23:04:30.024309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.861 [2024-11-18 23:04:30.036236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.861 [2024-11-18 23:04:30.038061] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.861 [2024-11-18 23:04:30.038136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.861 [2024-11-18 23:04:30.038169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.861 [2024-11-18 23:04:30.038208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.861 "name": "Existed_Raid", 00:09:10.861 "uuid": "14629d86-3bbb-4faa-a1a4-f1d2abba48ae", 00:09:10.861 "strip_size_kb": 0, 00:09:10.861 "state": "configuring", 00:09:10.861 "raid_level": "raid1", 00:09:10.861 "superblock": true, 00:09:10.861 "num_base_bdevs": 3, 00:09:10.861 "num_base_bdevs_discovered": 1, 00:09:10.861 "num_base_bdevs_operational": 3, 00:09:10.861 "base_bdevs_list": [ 00:09:10.861 { 00:09:10.861 "name": "BaseBdev1", 00:09:10.861 "uuid": "c8f79708-31be-467a-aea2-e35311b50a30", 00:09:10.861 "is_configured": true, 00:09:10.861 "data_offset": 2048, 00:09:10.861 "data_size": 63488 00:09:10.861 }, 00:09:10.861 { 00:09:10.861 "name": "BaseBdev2", 00:09:10.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.861 "is_configured": false, 00:09:10.861 "data_offset": 0, 00:09:10.861 "data_size": 0 00:09:10.861 }, 00:09:10.861 { 00:09:10.861 "name": "BaseBdev3", 00:09:10.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.861 "is_configured": false, 00:09:10.861 "data_offset": 0, 00:09:10.861 "data_size": 0 00:09:10.861 } 00:09:10.861 ] 00:09:10.861 }' 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.861 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.126 [2024-11-18 23:04:30.425245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.126 BaseBdev2 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:11.126 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.126 [ 00:09:11.126 { 00:09:11.126 "name": "BaseBdev2", 00:09:11.126 "aliases": [ 00:09:11.126 "85077bc2-68fe-4a92-ba01-37ec2ccdb483" 00:09:11.126 ], 00:09:11.126 "product_name": "Malloc disk", 00:09:11.126 "block_size": 512, 00:09:11.126 "num_blocks": 65536, 00:09:11.126 "uuid": "85077bc2-68fe-4a92-ba01-37ec2ccdb483", 00:09:11.127 "assigned_rate_limits": { 00:09:11.127 "rw_ios_per_sec": 0, 00:09:11.127 "rw_mbytes_per_sec": 0, 00:09:11.127 "r_mbytes_per_sec": 0, 00:09:11.127 "w_mbytes_per_sec": 0 00:09:11.127 }, 00:09:11.127 "claimed": true, 00:09:11.127 "claim_type": "exclusive_write", 00:09:11.127 "zoned": false, 00:09:11.127 "supported_io_types": { 00:09:11.127 "read": true, 00:09:11.127 "write": true, 00:09:11.127 "unmap": true, 00:09:11.127 "flush": true, 00:09:11.127 "reset": true, 00:09:11.127 "nvme_admin": false, 00:09:11.127 "nvme_io": false, 00:09:11.127 "nvme_io_md": false, 00:09:11.127 "write_zeroes": true, 00:09:11.127 "zcopy": true, 00:09:11.127 "get_zone_info": false, 00:09:11.127 "zone_management": false, 00:09:11.127 "zone_append": false, 00:09:11.127 "compare": false, 00:09:11.127 "compare_and_write": false, 00:09:11.127 "abort": true, 00:09:11.127 "seek_hole": false, 00:09:11.127 "seek_data": false, 00:09:11.127 "copy": true, 00:09:11.127 "nvme_iov_md": false 00:09:11.127 }, 00:09:11.127 "memory_domains": [ 00:09:11.127 { 00:09:11.127 "dma_device_id": "system", 00:09:11.127 "dma_device_type": 1 00:09:11.127 }, 00:09:11.127 { 00:09:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.127 "dma_device_type": 2 00:09:11.127 } 00:09:11.127 ], 00:09:11.127 "driver_specific": {} 00:09:11.127 } 00:09:11.127 ] 00:09:11.127 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.127 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:11.127 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.127 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.127 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.127 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.128 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.394 
23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.394 "name": "Existed_Raid", 00:09:11.394 "uuid": "14629d86-3bbb-4faa-a1a4-f1d2abba48ae", 00:09:11.394 "strip_size_kb": 0, 00:09:11.394 "state": "configuring", 00:09:11.394 "raid_level": "raid1", 00:09:11.394 "superblock": true, 00:09:11.394 "num_base_bdevs": 3, 00:09:11.394 "num_base_bdevs_discovered": 2, 00:09:11.394 "num_base_bdevs_operational": 3, 00:09:11.394 "base_bdevs_list": [ 00:09:11.394 { 00:09:11.394 "name": "BaseBdev1", 00:09:11.394 "uuid": "c8f79708-31be-467a-aea2-e35311b50a30", 00:09:11.394 "is_configured": true, 00:09:11.394 "data_offset": 2048, 00:09:11.394 "data_size": 63488 00:09:11.394 }, 00:09:11.394 { 00:09:11.394 "name": "BaseBdev2", 00:09:11.394 "uuid": "85077bc2-68fe-4a92-ba01-37ec2ccdb483", 00:09:11.394 "is_configured": true, 00:09:11.394 "data_offset": 2048, 00:09:11.394 "data_size": 63488 00:09:11.394 }, 00:09:11.394 { 00:09:11.394 "name": "BaseBdev3", 00:09:11.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.394 "is_configured": false, 00:09:11.394 "data_offset": 0, 00:09:11.394 "data_size": 0 00:09:11.394 } 00:09:11.394 ] 00:09:11.394 }' 00:09:11.394 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.394 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.654 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.655 [2024-11-18 23:04:30.871446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.655 [2024-11-18 23:04:30.871634] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:11.655 [2024-11-18 23:04:30.871653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:11.655 [2024-11-18 23:04:30.871936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:11.655 BaseBdev3 00:09:11.655 [2024-11-18 23:04:30.872068] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:11.655 [2024-11-18 23:04:30.872079] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:11.655 [2024-11-18 23:04:30.872192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.655 23:04:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.655 [ 00:09:11.655 { 00:09:11.655 "name": "BaseBdev3", 00:09:11.655 "aliases": [ 00:09:11.655 "7602bd84-1241-444c-88e2-c492ada17298" 00:09:11.655 ], 00:09:11.655 "product_name": "Malloc disk", 00:09:11.655 "block_size": 512, 00:09:11.655 "num_blocks": 65536, 00:09:11.655 "uuid": "7602bd84-1241-444c-88e2-c492ada17298", 00:09:11.655 "assigned_rate_limits": { 00:09:11.655 "rw_ios_per_sec": 0, 00:09:11.655 "rw_mbytes_per_sec": 0, 00:09:11.655 "r_mbytes_per_sec": 0, 00:09:11.655 "w_mbytes_per_sec": 0 00:09:11.655 }, 00:09:11.655 "claimed": true, 00:09:11.655 "claim_type": "exclusive_write", 00:09:11.655 "zoned": false, 00:09:11.655 "supported_io_types": { 00:09:11.655 "read": true, 00:09:11.655 "write": true, 00:09:11.655 "unmap": true, 00:09:11.655 "flush": true, 00:09:11.655 "reset": true, 00:09:11.655 "nvme_admin": false, 00:09:11.655 "nvme_io": false, 00:09:11.655 "nvme_io_md": false, 00:09:11.655 "write_zeroes": true, 00:09:11.655 "zcopy": true, 00:09:11.655 "get_zone_info": false, 00:09:11.655 "zone_management": false, 00:09:11.655 "zone_append": false, 00:09:11.655 "compare": false, 00:09:11.655 "compare_and_write": false, 00:09:11.655 "abort": true, 00:09:11.655 "seek_hole": false, 00:09:11.655 "seek_data": false, 00:09:11.655 "copy": true, 00:09:11.655 "nvme_iov_md": false 00:09:11.655 }, 00:09:11.655 "memory_domains": [ 00:09:11.655 { 00:09:11.655 "dma_device_id": "system", 00:09:11.655 "dma_device_type": 1 00:09:11.655 }, 00:09:11.655 { 00:09:11.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.655 "dma_device_type": 2 00:09:11.655 } 00:09:11.655 ], 00:09:11.655 "driver_specific": {} 00:09:11.655 } 00:09:11.655 ] 
00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.655 
23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.655 "name": "Existed_Raid", 00:09:11.655 "uuid": "14629d86-3bbb-4faa-a1a4-f1d2abba48ae", 00:09:11.655 "strip_size_kb": 0, 00:09:11.655 "state": "online", 00:09:11.655 "raid_level": "raid1", 00:09:11.655 "superblock": true, 00:09:11.655 "num_base_bdevs": 3, 00:09:11.655 "num_base_bdevs_discovered": 3, 00:09:11.655 "num_base_bdevs_operational": 3, 00:09:11.655 "base_bdevs_list": [ 00:09:11.655 { 00:09:11.655 "name": "BaseBdev1", 00:09:11.655 "uuid": "c8f79708-31be-467a-aea2-e35311b50a30", 00:09:11.655 "is_configured": true, 00:09:11.655 "data_offset": 2048, 00:09:11.655 "data_size": 63488 00:09:11.655 }, 00:09:11.655 { 00:09:11.655 "name": "BaseBdev2", 00:09:11.655 "uuid": "85077bc2-68fe-4a92-ba01-37ec2ccdb483", 00:09:11.655 "is_configured": true, 00:09:11.655 "data_offset": 2048, 00:09:11.655 "data_size": 63488 00:09:11.655 }, 00:09:11.655 { 00:09:11.655 "name": "BaseBdev3", 00:09:11.655 "uuid": "7602bd84-1241-444c-88e2-c492ada17298", 00:09:11.655 "is_configured": true, 00:09:11.655 "data_offset": 2048, 00:09:11.655 "data_size": 63488 00:09:11.655 } 00:09:11.655 ] 00:09:11.655 }' 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.655 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.226 [2024-11-18 23:04:31.311036] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.226 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.226 "name": "Existed_Raid", 00:09:12.226 "aliases": [ 00:09:12.226 "14629d86-3bbb-4faa-a1a4-f1d2abba48ae" 00:09:12.226 ], 00:09:12.226 "product_name": "Raid Volume", 00:09:12.227 "block_size": 512, 00:09:12.227 "num_blocks": 63488, 00:09:12.227 "uuid": "14629d86-3bbb-4faa-a1a4-f1d2abba48ae", 00:09:12.227 "assigned_rate_limits": { 00:09:12.227 "rw_ios_per_sec": 0, 00:09:12.227 "rw_mbytes_per_sec": 0, 00:09:12.227 "r_mbytes_per_sec": 0, 00:09:12.227 "w_mbytes_per_sec": 0 00:09:12.227 }, 00:09:12.227 "claimed": false, 00:09:12.227 "zoned": false, 00:09:12.227 "supported_io_types": { 00:09:12.227 "read": true, 00:09:12.227 "write": true, 00:09:12.227 "unmap": false, 00:09:12.227 "flush": false, 00:09:12.227 "reset": true, 00:09:12.227 "nvme_admin": false, 00:09:12.227 "nvme_io": false, 00:09:12.227 "nvme_io_md": false, 00:09:12.227 "write_zeroes": true, 
00:09:12.227 "zcopy": false, 00:09:12.227 "get_zone_info": false, 00:09:12.227 "zone_management": false, 00:09:12.227 "zone_append": false, 00:09:12.227 "compare": false, 00:09:12.227 "compare_and_write": false, 00:09:12.227 "abort": false, 00:09:12.227 "seek_hole": false, 00:09:12.227 "seek_data": false, 00:09:12.227 "copy": false, 00:09:12.227 "nvme_iov_md": false 00:09:12.227 }, 00:09:12.227 "memory_domains": [ 00:09:12.227 { 00:09:12.227 "dma_device_id": "system", 00:09:12.227 "dma_device_type": 1 00:09:12.227 }, 00:09:12.227 { 00:09:12.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.227 "dma_device_type": 2 00:09:12.227 }, 00:09:12.227 { 00:09:12.227 "dma_device_id": "system", 00:09:12.227 "dma_device_type": 1 00:09:12.227 }, 00:09:12.227 { 00:09:12.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.227 "dma_device_type": 2 00:09:12.227 }, 00:09:12.227 { 00:09:12.227 "dma_device_id": "system", 00:09:12.227 "dma_device_type": 1 00:09:12.227 }, 00:09:12.227 { 00:09:12.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.227 "dma_device_type": 2 00:09:12.227 } 00:09:12.227 ], 00:09:12.227 "driver_specific": { 00:09:12.227 "raid": { 00:09:12.227 "uuid": "14629d86-3bbb-4faa-a1a4-f1d2abba48ae", 00:09:12.227 "strip_size_kb": 0, 00:09:12.227 "state": "online", 00:09:12.227 "raid_level": "raid1", 00:09:12.227 "superblock": true, 00:09:12.227 "num_base_bdevs": 3, 00:09:12.227 "num_base_bdevs_discovered": 3, 00:09:12.227 "num_base_bdevs_operational": 3, 00:09:12.227 "base_bdevs_list": [ 00:09:12.227 { 00:09:12.227 "name": "BaseBdev1", 00:09:12.227 "uuid": "c8f79708-31be-467a-aea2-e35311b50a30", 00:09:12.227 "is_configured": true, 00:09:12.227 "data_offset": 2048, 00:09:12.227 "data_size": 63488 00:09:12.227 }, 00:09:12.227 { 00:09:12.227 "name": "BaseBdev2", 00:09:12.227 "uuid": "85077bc2-68fe-4a92-ba01-37ec2ccdb483", 00:09:12.227 "is_configured": true, 00:09:12.227 "data_offset": 2048, 00:09:12.227 "data_size": 63488 00:09:12.227 }, 00:09:12.227 { 
00:09:12.227 "name": "BaseBdev3", 00:09:12.227 "uuid": "7602bd84-1241-444c-88e2-c492ada17298", 00:09:12.227 "is_configured": true, 00:09:12.227 "data_offset": 2048, 00:09:12.227 "data_size": 63488 00:09:12.227 } 00:09:12.227 ] 00:09:12.227 } 00:09:12.227 } 00:09:12.227 }' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:12.227 BaseBdev2 00:09:12.227 BaseBdev3' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.227 23:04:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.227 [2024-11-18 23:04:31.582365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.227 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.227 
23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.487 "name": "Existed_Raid", 00:09:12.487 "uuid": "14629d86-3bbb-4faa-a1a4-f1d2abba48ae", 00:09:12.487 "strip_size_kb": 0, 00:09:12.487 "state": "online", 00:09:12.487 "raid_level": "raid1", 00:09:12.487 "superblock": true, 00:09:12.487 "num_base_bdevs": 3, 00:09:12.487 "num_base_bdevs_discovered": 2, 00:09:12.487 "num_base_bdevs_operational": 2, 00:09:12.487 "base_bdevs_list": [ 00:09:12.487 { 00:09:12.487 "name": null, 00:09:12.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.487 "is_configured": false, 00:09:12.487 "data_offset": 0, 00:09:12.487 "data_size": 63488 00:09:12.487 }, 00:09:12.487 { 00:09:12.487 "name": "BaseBdev2", 00:09:12.487 "uuid": "85077bc2-68fe-4a92-ba01-37ec2ccdb483", 00:09:12.487 "is_configured": true, 00:09:12.487 "data_offset": 2048, 00:09:12.487 "data_size": 63488 00:09:12.487 }, 00:09:12.487 { 00:09:12.487 "name": "BaseBdev3", 00:09:12.487 "uuid": "7602bd84-1241-444c-88e2-c492ada17298", 00:09:12.487 "is_configured": true, 00:09:12.487 "data_offset": 2048, 00:09:12.487 "data_size": 63488 00:09:12.487 } 00:09:12.487 ] 00:09:12.487 }' 00:09:12.487 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.487 
23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.753 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.753 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.753 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.753 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.753 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.753 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.753 [2024-11-18 23:04:32.048749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.753 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.753 [2024-11-18 23:04:32.115933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.753 [2024-11-18 23:04:32.116033] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.753 [2024-11-18 23:04:32.127539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.753 [2024-11-18 23:04:32.127601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.753 [2024-11-18 23:04:32.127617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.013 BaseBdev2 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.013 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.013 [ 00:09:13.013 { 00:09:13.013 "name": "BaseBdev2", 00:09:13.013 "aliases": [ 00:09:13.013 "53aa1df0-9e81-4c94-bd31-c75c2c720826" 00:09:13.013 ], 00:09:13.013 "product_name": "Malloc disk", 00:09:13.013 "block_size": 512, 00:09:13.013 "num_blocks": 65536, 00:09:13.013 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:13.013 "assigned_rate_limits": { 00:09:13.013 "rw_ios_per_sec": 0, 00:09:13.013 "rw_mbytes_per_sec": 0, 00:09:13.013 "r_mbytes_per_sec": 0, 00:09:13.013 "w_mbytes_per_sec": 0 00:09:13.013 }, 00:09:13.013 "claimed": false, 00:09:13.013 "zoned": false, 00:09:13.013 "supported_io_types": { 00:09:13.013 "read": true, 00:09:13.013 "write": true, 00:09:13.013 "unmap": true, 00:09:13.013 "flush": true, 00:09:13.013 "reset": true, 00:09:13.013 "nvme_admin": false, 00:09:13.013 "nvme_io": false, 00:09:13.013 
"nvme_io_md": false, 00:09:13.013 "write_zeroes": true, 00:09:13.013 "zcopy": true, 00:09:13.013 "get_zone_info": false, 00:09:13.013 "zone_management": false, 00:09:13.013 "zone_append": false, 00:09:13.013 "compare": false, 00:09:13.013 "compare_and_write": false, 00:09:13.013 "abort": true, 00:09:13.013 "seek_hole": false, 00:09:13.013 "seek_data": false, 00:09:13.014 "copy": true, 00:09:13.014 "nvme_iov_md": false 00:09:13.014 }, 00:09:13.014 "memory_domains": [ 00:09:13.014 { 00:09:13.014 "dma_device_id": "system", 00:09:13.014 "dma_device_type": 1 00:09:13.014 }, 00:09:13.014 { 00:09:13.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.014 "dma_device_type": 2 00:09:13.014 } 00:09:13.014 ], 00:09:13.014 "driver_specific": {} 00:09:13.014 } 00:09:13.014 ] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.014 BaseBdev3 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.014 [ 00:09:13.014 { 00:09:13.014 "name": "BaseBdev3", 00:09:13.014 "aliases": [ 00:09:13.014 "8e5cb491-dee1-46cf-a902-9e6802496d9d" 00:09:13.014 ], 00:09:13.014 "product_name": "Malloc disk", 00:09:13.014 "block_size": 512, 00:09:13.014 "num_blocks": 65536, 00:09:13.014 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:13.014 "assigned_rate_limits": { 00:09:13.014 "rw_ios_per_sec": 0, 00:09:13.014 "rw_mbytes_per_sec": 0, 00:09:13.014 "r_mbytes_per_sec": 0, 00:09:13.014 "w_mbytes_per_sec": 0 00:09:13.014 }, 00:09:13.014 "claimed": false, 00:09:13.014 "zoned": false, 00:09:13.014 "supported_io_types": { 00:09:13.014 "read": true, 00:09:13.014 "write": true, 00:09:13.014 "unmap": true, 00:09:13.014 "flush": true, 00:09:13.014 "reset": true, 00:09:13.014 "nvme_admin": false, 
00:09:13.014 "nvme_io": false, 00:09:13.014 "nvme_io_md": false, 00:09:13.014 "write_zeroes": true, 00:09:13.014 "zcopy": true, 00:09:13.014 "get_zone_info": false, 00:09:13.014 "zone_management": false, 00:09:13.014 "zone_append": false, 00:09:13.014 "compare": false, 00:09:13.014 "compare_and_write": false, 00:09:13.014 "abort": true, 00:09:13.014 "seek_hole": false, 00:09:13.014 "seek_data": false, 00:09:13.014 "copy": true, 00:09:13.014 "nvme_iov_md": false 00:09:13.014 }, 00:09:13.014 "memory_domains": [ 00:09:13.014 { 00:09:13.014 "dma_device_id": "system", 00:09:13.014 "dma_device_type": 1 00:09:13.014 }, 00:09:13.014 { 00:09:13.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.014 "dma_device_type": 2 00:09:13.014 } 00:09:13.014 ], 00:09:13.014 "driver_specific": {} 00:09:13.014 } 00:09:13.014 ] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.014 [2024-11-18 23:04:32.290766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.014 [2024-11-18 23:04:32.290850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.014 [2024-11-18 23:04:32.290889] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.014 [2024-11-18 23:04:32.292755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.014 
23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.014 "name": "Existed_Raid", 00:09:13.014 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:13.014 "strip_size_kb": 0, 00:09:13.014 "state": "configuring", 00:09:13.014 "raid_level": "raid1", 00:09:13.014 "superblock": true, 00:09:13.014 "num_base_bdevs": 3, 00:09:13.014 "num_base_bdevs_discovered": 2, 00:09:13.014 "num_base_bdevs_operational": 3, 00:09:13.014 "base_bdevs_list": [ 00:09:13.014 { 00:09:13.014 "name": "BaseBdev1", 00:09:13.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.014 "is_configured": false, 00:09:13.014 "data_offset": 0, 00:09:13.014 "data_size": 0 00:09:13.014 }, 00:09:13.014 { 00:09:13.014 "name": "BaseBdev2", 00:09:13.014 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:13.014 "is_configured": true, 00:09:13.014 "data_offset": 2048, 00:09:13.014 "data_size": 63488 00:09:13.014 }, 00:09:13.014 { 00:09:13.014 "name": "BaseBdev3", 00:09:13.014 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:13.014 "is_configured": true, 00:09:13.014 "data_offset": 2048, 00:09:13.014 "data_size": 63488 00:09:13.014 } 00:09:13.014 ] 00:09:13.014 }' 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.014 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.583 [2024-11-18 23:04:32.725998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.583 23:04:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.583 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.584 "name": 
"Existed_Raid", 00:09:13.584 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:13.584 "strip_size_kb": 0, 00:09:13.584 "state": "configuring", 00:09:13.584 "raid_level": "raid1", 00:09:13.584 "superblock": true, 00:09:13.584 "num_base_bdevs": 3, 00:09:13.584 "num_base_bdevs_discovered": 1, 00:09:13.584 "num_base_bdevs_operational": 3, 00:09:13.584 "base_bdevs_list": [ 00:09:13.584 { 00:09:13.584 "name": "BaseBdev1", 00:09:13.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.584 "is_configured": false, 00:09:13.584 "data_offset": 0, 00:09:13.584 "data_size": 0 00:09:13.584 }, 00:09:13.584 { 00:09:13.584 "name": null, 00:09:13.584 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:13.584 "is_configured": false, 00:09:13.584 "data_offset": 0, 00:09:13.584 "data_size": 63488 00:09:13.584 }, 00:09:13.584 { 00:09:13.584 "name": "BaseBdev3", 00:09:13.584 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:13.584 "is_configured": true, 00:09:13.584 "data_offset": 2048, 00:09:13.584 "data_size": 63488 00:09:13.584 } 00:09:13.584 ] 00:09:13.584 }' 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.584 23:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.844 
23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.844 [2024-11-18 23:04:33.200122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.844 BaseBdev1 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:13.844 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.103 [ 00:09:14.103 { 00:09:14.103 "name": "BaseBdev1", 00:09:14.103 "aliases": [ 00:09:14.103 "07697792-bf8b-4f46-bf7d-6e17e8167b0a" 00:09:14.103 ], 00:09:14.103 "product_name": "Malloc disk", 00:09:14.103 "block_size": 512, 00:09:14.103 "num_blocks": 65536, 00:09:14.103 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:14.103 "assigned_rate_limits": { 00:09:14.103 "rw_ios_per_sec": 0, 00:09:14.103 "rw_mbytes_per_sec": 0, 00:09:14.103 "r_mbytes_per_sec": 0, 00:09:14.103 "w_mbytes_per_sec": 0 00:09:14.103 }, 00:09:14.103 "claimed": true, 00:09:14.103 "claim_type": "exclusive_write", 00:09:14.103 "zoned": false, 00:09:14.103 "supported_io_types": { 00:09:14.103 "read": true, 00:09:14.103 "write": true, 00:09:14.103 "unmap": true, 00:09:14.103 "flush": true, 00:09:14.103 "reset": true, 00:09:14.103 "nvme_admin": false, 00:09:14.103 "nvme_io": false, 00:09:14.103 "nvme_io_md": false, 00:09:14.103 "write_zeroes": true, 00:09:14.103 "zcopy": true, 00:09:14.103 "get_zone_info": false, 00:09:14.103 "zone_management": false, 00:09:14.103 "zone_append": false, 00:09:14.103 "compare": false, 00:09:14.103 "compare_and_write": false, 00:09:14.103 "abort": true, 00:09:14.103 "seek_hole": false, 00:09:14.103 "seek_data": false, 00:09:14.103 "copy": true, 00:09:14.103 "nvme_iov_md": false 00:09:14.103 }, 00:09:14.103 "memory_domains": [ 00:09:14.103 { 00:09:14.103 "dma_device_id": "system", 00:09:14.103 "dma_device_type": 1 00:09:14.103 }, 00:09:14.103 { 00:09:14.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.103 "dma_device_type": 2 00:09:14.103 } 00:09:14.103 ], 00:09:14.103 "driver_specific": {} 00:09:14.103 } 00:09:14.103 ] 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:14.103 
23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.103 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.103 "name": "Existed_Raid", 00:09:14.103 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:14.103 "strip_size_kb": 0, 
00:09:14.103 "state": "configuring", 00:09:14.103 "raid_level": "raid1", 00:09:14.103 "superblock": true, 00:09:14.103 "num_base_bdevs": 3, 00:09:14.103 "num_base_bdevs_discovered": 2, 00:09:14.103 "num_base_bdevs_operational": 3, 00:09:14.103 "base_bdevs_list": [ 00:09:14.103 { 00:09:14.103 "name": "BaseBdev1", 00:09:14.103 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:14.103 "is_configured": true, 00:09:14.103 "data_offset": 2048, 00:09:14.103 "data_size": 63488 00:09:14.103 }, 00:09:14.103 { 00:09:14.103 "name": null, 00:09:14.104 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:14.104 "is_configured": false, 00:09:14.104 "data_offset": 0, 00:09:14.104 "data_size": 63488 00:09:14.104 }, 00:09:14.104 { 00:09:14.104 "name": "BaseBdev3", 00:09:14.104 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:14.104 "is_configured": true, 00:09:14.104 "data_offset": 2048, 00:09:14.104 "data_size": 63488 00:09:14.104 } 00:09:14.104 ] 00:09:14.104 }' 00:09:14.104 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.104 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.362 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.362 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.362 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.622 [2024-11-18 23:04:33.751268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.622 "name": "Existed_Raid", 00:09:14.622 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:14.622 "strip_size_kb": 0, 00:09:14.622 "state": "configuring", 00:09:14.622 "raid_level": "raid1", 00:09:14.622 "superblock": true, 00:09:14.622 "num_base_bdevs": 3, 00:09:14.622 "num_base_bdevs_discovered": 1, 00:09:14.622 "num_base_bdevs_operational": 3, 00:09:14.622 "base_bdevs_list": [ 00:09:14.622 { 00:09:14.622 "name": "BaseBdev1", 00:09:14.622 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:14.622 "is_configured": true, 00:09:14.622 "data_offset": 2048, 00:09:14.622 "data_size": 63488 00:09:14.622 }, 00:09:14.622 { 00:09:14.622 "name": null, 00:09:14.622 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:14.622 "is_configured": false, 00:09:14.622 "data_offset": 0, 00:09:14.622 "data_size": 63488 00:09:14.622 }, 00:09:14.622 { 00:09:14.622 "name": null, 00:09:14.622 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:14.622 "is_configured": false, 00:09:14.622 "data_offset": 0, 00:09:14.622 "data_size": 63488 00:09:14.622 } 00:09:14.622 ] 00:09:14.622 }' 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.622 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 23:04:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 [2024-11-18 23:04:34.174554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.883 "name": "Existed_Raid", 00:09:14.883 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:14.883 "strip_size_kb": 0, 00:09:14.883 "state": "configuring", 00:09:14.883 "raid_level": "raid1", 00:09:14.883 "superblock": true, 00:09:14.883 "num_base_bdevs": 3, 00:09:14.883 "num_base_bdevs_discovered": 2, 00:09:14.883 "num_base_bdevs_operational": 3, 00:09:14.883 "base_bdevs_list": [ 00:09:14.883 { 00:09:14.883 "name": "BaseBdev1", 00:09:14.883 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:14.883 "is_configured": true, 00:09:14.883 "data_offset": 2048, 00:09:14.883 "data_size": 63488 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "name": null, 00:09:14.883 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:14.883 "is_configured": false, 00:09:14.883 "data_offset": 0, 00:09:14.883 "data_size": 63488 00:09:14.883 }, 00:09:14.883 { 00:09:14.883 "name": "BaseBdev3", 00:09:14.883 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:14.883 "is_configured": true, 00:09:14.883 "data_offset": 2048, 00:09:14.883 "data_size": 63488 00:09:14.883 } 00:09:14.883 ] 00:09:14.883 }' 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.883 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.451 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.451 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.451 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.451 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.451 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.452 [2024-11-18 23:04:34.585841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.452 "name": "Existed_Raid", 00:09:15.452 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:15.452 "strip_size_kb": 0, 00:09:15.452 "state": "configuring", 00:09:15.452 "raid_level": "raid1", 00:09:15.452 "superblock": true, 00:09:15.452 "num_base_bdevs": 3, 00:09:15.452 "num_base_bdevs_discovered": 1, 00:09:15.452 "num_base_bdevs_operational": 3, 00:09:15.452 "base_bdevs_list": [ 00:09:15.452 { 00:09:15.452 "name": null, 00:09:15.452 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:15.452 "is_configured": false, 00:09:15.452 "data_offset": 0, 00:09:15.452 "data_size": 63488 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "name": null, 00:09:15.452 "uuid": 
"53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:15.452 "is_configured": false, 00:09:15.452 "data_offset": 0, 00:09:15.452 "data_size": 63488 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "name": "BaseBdev3", 00:09:15.452 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:15.452 "is_configured": true, 00:09:15.452 "data_offset": 2048, 00:09:15.452 "data_size": 63488 00:09:15.452 } 00:09:15.452 ] 00:09:15.452 }' 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.452 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.711 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.711 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.970 [2024-11-18 23:04:35.095380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.970 "name": "Existed_Raid", 00:09:15.970 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:15.970 "strip_size_kb": 0, 00:09:15.970 "state": "configuring", 00:09:15.970 
"raid_level": "raid1", 00:09:15.970 "superblock": true, 00:09:15.970 "num_base_bdevs": 3, 00:09:15.970 "num_base_bdevs_discovered": 2, 00:09:15.970 "num_base_bdevs_operational": 3, 00:09:15.970 "base_bdevs_list": [ 00:09:15.970 { 00:09:15.970 "name": null, 00:09:15.970 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:15.970 "is_configured": false, 00:09:15.970 "data_offset": 0, 00:09:15.970 "data_size": 63488 00:09:15.970 }, 00:09:15.970 { 00:09:15.970 "name": "BaseBdev2", 00:09:15.970 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:15.970 "is_configured": true, 00:09:15.970 "data_offset": 2048, 00:09:15.970 "data_size": 63488 00:09:15.970 }, 00:09:15.970 { 00:09:15.970 "name": "BaseBdev3", 00:09:15.970 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:15.970 "is_configured": true, 00:09:15.970 "data_offset": 2048, 00:09:15.970 "data_size": 63488 00:09:15.970 } 00:09:15.970 ] 00:09:15.970 }' 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.970 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.230 23:04:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07697792-bf8b-4f46-bf7d-6e17e8167b0a 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.230 [2024-11-18 23:04:35.589431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.230 [2024-11-18 23:04:35.589596] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:16.230 [2024-11-18 23:04:35.589609] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:16.230 [2024-11-18 23:04:35.589847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:16.230 [2024-11-18 23:04:35.589974] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:16.230 [2024-11-18 23:04:35.589988] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:16.230 NewBaseBdev 00:09:16.230 [2024-11-18 23:04:35.590081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.230 
23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.230 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.489 [ 00:09:16.489 { 00:09:16.489 "name": "NewBaseBdev", 00:09:16.489 "aliases": [ 00:09:16.489 "07697792-bf8b-4f46-bf7d-6e17e8167b0a" 00:09:16.489 ], 00:09:16.489 "product_name": "Malloc disk", 00:09:16.489 "block_size": 512, 00:09:16.489 "num_blocks": 65536, 00:09:16.489 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:16.489 "assigned_rate_limits": { 00:09:16.489 "rw_ios_per_sec": 0, 00:09:16.489 "rw_mbytes_per_sec": 0, 00:09:16.489 "r_mbytes_per_sec": 0, 00:09:16.489 "w_mbytes_per_sec": 0 00:09:16.489 }, 00:09:16.489 "claimed": true, 00:09:16.489 "claim_type": "exclusive_write", 00:09:16.489 
"zoned": false, 00:09:16.489 "supported_io_types": { 00:09:16.489 "read": true, 00:09:16.489 "write": true, 00:09:16.489 "unmap": true, 00:09:16.489 "flush": true, 00:09:16.489 "reset": true, 00:09:16.489 "nvme_admin": false, 00:09:16.489 "nvme_io": false, 00:09:16.489 "nvme_io_md": false, 00:09:16.489 "write_zeroes": true, 00:09:16.489 "zcopy": true, 00:09:16.489 "get_zone_info": false, 00:09:16.489 "zone_management": false, 00:09:16.489 "zone_append": false, 00:09:16.489 "compare": false, 00:09:16.489 "compare_and_write": false, 00:09:16.489 "abort": true, 00:09:16.489 "seek_hole": false, 00:09:16.489 "seek_data": false, 00:09:16.489 "copy": true, 00:09:16.489 "nvme_iov_md": false 00:09:16.489 }, 00:09:16.489 "memory_domains": [ 00:09:16.489 { 00:09:16.489 "dma_device_id": "system", 00:09:16.489 "dma_device_type": 1 00:09:16.489 }, 00:09:16.489 { 00:09:16.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.489 "dma_device_type": 2 00:09:16.489 } 00:09:16.489 ], 00:09:16.489 "driver_specific": {} 00:09:16.489 } 00:09:16.489 ] 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.489 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.489 "name": "Existed_Raid", 00:09:16.489 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:16.489 "strip_size_kb": 0, 00:09:16.489 "state": "online", 00:09:16.489 "raid_level": "raid1", 00:09:16.489 "superblock": true, 00:09:16.489 "num_base_bdevs": 3, 00:09:16.490 "num_base_bdevs_discovered": 3, 00:09:16.490 "num_base_bdevs_operational": 3, 00:09:16.490 "base_bdevs_list": [ 00:09:16.490 { 00:09:16.490 "name": "NewBaseBdev", 00:09:16.490 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:16.490 "is_configured": true, 00:09:16.490 "data_offset": 2048, 00:09:16.490 "data_size": 63488 00:09:16.490 }, 00:09:16.490 { 00:09:16.490 "name": "BaseBdev2", 00:09:16.490 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:16.490 "is_configured": true, 00:09:16.490 "data_offset": 2048, 00:09:16.490 "data_size": 63488 00:09:16.490 }, 00:09:16.490 
{ 00:09:16.490 "name": "BaseBdev3", 00:09:16.490 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:16.490 "is_configured": true, 00:09:16.490 "data_offset": 2048, 00:09:16.490 "data_size": 63488 00:09:16.490 } 00:09:16.490 ] 00:09:16.490 }' 00:09:16.490 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.490 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.760 [2024-11-18 23:04:36.076948] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.760 "name": "Existed_Raid", 00:09:16.760 
"aliases": [ 00:09:16.760 "73a1f183-0f36-4cf3-beb1-6db2af328210" 00:09:16.760 ], 00:09:16.760 "product_name": "Raid Volume", 00:09:16.760 "block_size": 512, 00:09:16.760 "num_blocks": 63488, 00:09:16.760 "uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:16.760 "assigned_rate_limits": { 00:09:16.760 "rw_ios_per_sec": 0, 00:09:16.760 "rw_mbytes_per_sec": 0, 00:09:16.760 "r_mbytes_per_sec": 0, 00:09:16.760 "w_mbytes_per_sec": 0 00:09:16.760 }, 00:09:16.760 "claimed": false, 00:09:16.760 "zoned": false, 00:09:16.760 "supported_io_types": { 00:09:16.760 "read": true, 00:09:16.760 "write": true, 00:09:16.760 "unmap": false, 00:09:16.760 "flush": false, 00:09:16.760 "reset": true, 00:09:16.760 "nvme_admin": false, 00:09:16.760 "nvme_io": false, 00:09:16.760 "nvme_io_md": false, 00:09:16.760 "write_zeroes": true, 00:09:16.760 "zcopy": false, 00:09:16.760 "get_zone_info": false, 00:09:16.760 "zone_management": false, 00:09:16.760 "zone_append": false, 00:09:16.760 "compare": false, 00:09:16.760 "compare_and_write": false, 00:09:16.760 "abort": false, 00:09:16.760 "seek_hole": false, 00:09:16.760 "seek_data": false, 00:09:16.760 "copy": false, 00:09:16.760 "nvme_iov_md": false 00:09:16.760 }, 00:09:16.760 "memory_domains": [ 00:09:16.760 { 00:09:16.760 "dma_device_id": "system", 00:09:16.760 "dma_device_type": 1 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.760 "dma_device_type": 2 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "dma_device_id": "system", 00:09:16.760 "dma_device_type": 1 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.760 "dma_device_type": 2 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "dma_device_id": "system", 00:09:16.760 "dma_device_type": 1 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.760 "dma_device_type": 2 00:09:16.760 } 00:09:16.760 ], 00:09:16.760 "driver_specific": { 00:09:16.760 "raid": { 00:09:16.760 
"uuid": "73a1f183-0f36-4cf3-beb1-6db2af328210", 00:09:16.760 "strip_size_kb": 0, 00:09:16.760 "state": "online", 00:09:16.760 "raid_level": "raid1", 00:09:16.760 "superblock": true, 00:09:16.760 "num_base_bdevs": 3, 00:09:16.760 "num_base_bdevs_discovered": 3, 00:09:16.760 "num_base_bdevs_operational": 3, 00:09:16.760 "base_bdevs_list": [ 00:09:16.760 { 00:09:16.760 "name": "NewBaseBdev", 00:09:16.760 "uuid": "07697792-bf8b-4f46-bf7d-6e17e8167b0a", 00:09:16.760 "is_configured": true, 00:09:16.760 "data_offset": 2048, 00:09:16.760 "data_size": 63488 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "name": "BaseBdev2", 00:09:16.760 "uuid": "53aa1df0-9e81-4c94-bd31-c75c2c720826", 00:09:16.760 "is_configured": true, 00:09:16.760 "data_offset": 2048, 00:09:16.760 "data_size": 63488 00:09:16.760 }, 00:09:16.760 { 00:09:16.760 "name": "BaseBdev3", 00:09:16.760 "uuid": "8e5cb491-dee1-46cf-a902-9e6802496d9d", 00:09:16.760 "is_configured": true, 00:09:16.760 "data_offset": 2048, 00:09:16.760 "data_size": 63488 00:09:16.760 } 00:09:16.760 ] 00:09:16.760 } 00:09:16.760 } 00:09:16.760 }' 00:09:16.760 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:17.039 BaseBdev2 00:09:17.039 BaseBdev3' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:17.039 23:04:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.039 23:04:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.039 [2024-11-18 23:04:36.348179] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.039 [2024-11-18 23:04:36.348208] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.039 [2024-11-18 23:04:36.348271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.039 [2024-11-18 23:04:36.348524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.039 [2024-11-18 23:04:36.348536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79024 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 79024 ']' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79024 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79024 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79024' 00:09:17.039 killing process with pid 79024 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79024 00:09:17.039 [2024-11-18 23:04:36.397196] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.039 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79024 00:09:17.300 [2024-11-18 23:04:36.427873] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.300 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.300 ************************************ 00:09:17.300 END TEST raid_state_function_test_sb 00:09:17.300 ************************************ 00:09:17.300 00:09:17.300 real 0m8.483s 00:09:17.300 user 0m14.545s 00:09:17.300 sys 0m1.675s 00:09:17.300 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.300 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.570 23:04:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:17.570 23:04:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:17.570 23:04:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.570 23:04:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.570 ************************************ 00:09:17.570 START TEST raid_superblock_test 00:09:17.570 ************************************ 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79622 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79622 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79622 ']' 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.570 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.570 [2024-11-18 23:04:36.820697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:17.570 [2024-11-18 23:04:36.820906] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79622 ] 00:09:17.831 [2024-11-18 23:04:36.978143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.831 [2024-11-18 23:04:37.022666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.831 [2024-11-18 23:04:37.064442] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.831 [2024-11-18 23:04:37.064556] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.399 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:18.400 
23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 malloc1 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 [2024-11-18 23:04:37.646685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.400 [2024-11-18 23:04:37.646770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.400 [2024-11-18 23:04:37.646791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:18.400 [2024-11-18 23:04:37.646804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.400 [2024-11-18 23:04:37.648917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.400 [2024-11-18 23:04:37.649004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.400 pt1 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 malloc2 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 [2024-11-18 23:04:37.687441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.400 [2024-11-18 23:04:37.687501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.400 [2024-11-18 23:04:37.687520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:18.400 [2024-11-18 23:04:37.687532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.400 [2024-11-18 23:04:37.689794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.400 [2024-11-18 23:04:37.689830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.400 
pt2 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 malloc3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 [2024-11-18 23:04:37.715871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.400 [2024-11-18 23:04:37.715977] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.400 [2024-11-18 23:04:37.716010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:18.400 [2024-11-18 23:04:37.716040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.400 [2024-11-18 23:04:37.718059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.400 [2024-11-18 23:04:37.718126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.400 pt3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.400 [2024-11-18 23:04:37.727894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.400 [2024-11-18 23:04:37.729717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.400 [2024-11-18 23:04:37.729830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.400 [2024-11-18 23:04:37.730009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:18.400 [2024-11-18 23:04:37.730053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:18.400 [2024-11-18 23:04:37.730322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:18.400 
[2024-11-18 23:04:37.730490] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:18.400 [2024-11-18 23:04:37.730537] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:18.400 [2024-11-18 23:04:37.730681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.400 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.659 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.659 "name": "raid_bdev1", 00:09:18.659 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:18.659 "strip_size_kb": 0, 00:09:18.659 "state": "online", 00:09:18.659 "raid_level": "raid1", 00:09:18.659 "superblock": true, 00:09:18.659 "num_base_bdevs": 3, 00:09:18.659 "num_base_bdevs_discovered": 3, 00:09:18.659 "num_base_bdevs_operational": 3, 00:09:18.659 "base_bdevs_list": [ 00:09:18.659 { 00:09:18.659 "name": "pt1", 00:09:18.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.659 "is_configured": true, 00:09:18.659 "data_offset": 2048, 00:09:18.659 "data_size": 63488 00:09:18.659 }, 00:09:18.659 { 00:09:18.659 "name": "pt2", 00:09:18.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.659 "is_configured": true, 00:09:18.659 "data_offset": 2048, 00:09:18.659 "data_size": 63488 00:09:18.659 }, 00:09:18.659 { 00:09:18.659 "name": "pt3", 00:09:18.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.659 "is_configured": true, 00:09:18.659 "data_offset": 2048, 00:09:18.659 "data_size": 63488 00:09:18.659 } 00:09:18.659 ] 00:09:18.659 }' 00:09:18.659 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.659 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.919 23:04:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 [2024-11-18 23:04:38.187418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.919 "name": "raid_bdev1", 00:09:18.919 "aliases": [ 00:09:18.919 "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b" 00:09:18.919 ], 00:09:18.919 "product_name": "Raid Volume", 00:09:18.919 "block_size": 512, 00:09:18.919 "num_blocks": 63488, 00:09:18.919 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:18.919 "assigned_rate_limits": { 00:09:18.919 "rw_ios_per_sec": 0, 00:09:18.919 "rw_mbytes_per_sec": 0, 00:09:18.919 "r_mbytes_per_sec": 0, 00:09:18.919 "w_mbytes_per_sec": 0 00:09:18.919 }, 00:09:18.919 "claimed": false, 00:09:18.919 "zoned": false, 00:09:18.919 "supported_io_types": { 00:09:18.919 "read": true, 00:09:18.919 "write": true, 00:09:18.919 "unmap": false, 00:09:18.919 "flush": false, 00:09:18.919 "reset": true, 00:09:18.919 "nvme_admin": false, 00:09:18.919 "nvme_io": false, 00:09:18.919 "nvme_io_md": false, 00:09:18.919 "write_zeroes": true, 00:09:18.919 "zcopy": false, 00:09:18.919 "get_zone_info": false, 00:09:18.919 "zone_management": false, 00:09:18.919 "zone_append": false, 00:09:18.919 "compare": false, 00:09:18.919 
"compare_and_write": false, 00:09:18.919 "abort": false, 00:09:18.919 "seek_hole": false, 00:09:18.919 "seek_data": false, 00:09:18.919 "copy": false, 00:09:18.919 "nvme_iov_md": false 00:09:18.919 }, 00:09:18.919 "memory_domains": [ 00:09:18.919 { 00:09:18.919 "dma_device_id": "system", 00:09:18.919 "dma_device_type": 1 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.919 "dma_device_type": 2 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "dma_device_id": "system", 00:09:18.919 "dma_device_type": 1 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.919 "dma_device_type": 2 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "dma_device_id": "system", 00:09:18.919 "dma_device_type": 1 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.919 "dma_device_type": 2 00:09:18.919 } 00:09:18.919 ], 00:09:18.919 "driver_specific": { 00:09:18.919 "raid": { 00:09:18.919 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:18.919 "strip_size_kb": 0, 00:09:18.919 "state": "online", 00:09:18.919 "raid_level": "raid1", 00:09:18.919 "superblock": true, 00:09:18.919 "num_base_bdevs": 3, 00:09:18.919 "num_base_bdevs_discovered": 3, 00:09:18.919 "num_base_bdevs_operational": 3, 00:09:18.919 "base_bdevs_list": [ 00:09:18.919 { 00:09:18.919 "name": "pt1", 00:09:18.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.919 "is_configured": true, 00:09:18.919 "data_offset": 2048, 00:09:18.919 "data_size": 63488 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "name": "pt2", 00:09:18.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.919 "is_configured": true, 00:09:18.919 "data_offset": 2048, 00:09:18.919 "data_size": 63488 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "name": "pt3", 00:09:18.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.919 "is_configured": true, 00:09:18.919 "data_offset": 2048, 00:09:18.919 "data_size": 63488 00:09:18.919 } 
00:09:18.919 ] 00:09:18.919 } 00:09:18.919 } 00:09:18.919 }' 00:09:18.919 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.920 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.920 pt2 00:09:18.920 pt3' 00:09:18.920 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.180 23:04:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:19.180 [2024-11-18 23:04:38.458859] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b ']' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.180 [2024-11-18 23:04:38.506527] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.180 [2024-11-18 23:04:38.506596] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.180 [2024-11-18 23:04:38.506686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.180 [2024-11-18 23:04:38.506762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.180 [2024-11-18 23:04:38.506775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.180 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:19.440 
23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.440 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 [2024-11-18 23:04:38.662323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:19.441 [2024-11-18 23:04:38.664155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:19.441 [2024-11-18 23:04:38.664200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:19.441 [2024-11-18 23:04:38.664247] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:19.441 [2024-11-18 23:04:38.664313] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:19.441 [2024-11-18 23:04:38.664336] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:19.441 [2024-11-18 23:04:38.664348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.441 [2024-11-18 23:04:38.664358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:19.441 request: 00:09:19.441 { 00:09:19.441 "name": "raid_bdev1", 00:09:19.441 "raid_level": "raid1", 00:09:19.441 "base_bdevs": [ 00:09:19.441 "malloc1", 00:09:19.441 "malloc2", 00:09:19.441 "malloc3" 00:09:19.441 ], 00:09:19.441 "superblock": false, 00:09:19.441 "method": "bdev_raid_create", 00:09:19.441 "req_id": 1 00:09:19.441 } 00:09:19.441 Got JSON-RPC error response 00:09:19.441 response: 00:09:19.441 { 00:09:19.441 "code": -17, 00:09:19.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:19.441 } 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:19.441 23:04:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 [2024-11-18 23:04:38.714163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.441 [2024-11-18 23:04:38.714258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.441 [2024-11-18 23:04:38.714308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:19.441 [2024-11-18 23:04:38.714357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.441 [2024-11-18 23:04:38.716502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.441 [2024-11-18 23:04:38.716571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.441 [2024-11-18 23:04:38.716658] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:19.441 [2024-11-18 23:04:38.716724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.441 pt1 00:09:19.441 23:04:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.441 "name": "raid_bdev1", 00:09:19.441 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:19.441 "strip_size_kb": 0, 00:09:19.441 "state": 
"configuring", 00:09:19.441 "raid_level": "raid1", 00:09:19.441 "superblock": true, 00:09:19.441 "num_base_bdevs": 3, 00:09:19.441 "num_base_bdevs_discovered": 1, 00:09:19.441 "num_base_bdevs_operational": 3, 00:09:19.441 "base_bdevs_list": [ 00:09:19.441 { 00:09:19.441 "name": "pt1", 00:09:19.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.441 "is_configured": true, 00:09:19.441 "data_offset": 2048, 00:09:19.441 "data_size": 63488 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "name": null, 00:09:19.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.441 "is_configured": false, 00:09:19.441 "data_offset": 2048, 00:09:19.441 "data_size": 63488 00:09:19.441 }, 00:09:19.441 { 00:09:19.441 "name": null, 00:09:19.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.441 "is_configured": false, 00:09:19.441 "data_offset": 2048, 00:09:19.441 "data_size": 63488 00:09:19.441 } 00:09:19.441 ] 00:09:19.441 }' 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.441 23:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.015 [2024-11-18 23:04:39.181414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.015 [2024-11-18 23:04:39.181472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.015 [2024-11-18 23:04:39.181491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:20.015 
[2024-11-18 23:04:39.181504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.015 [2024-11-18 23:04:39.181870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.015 [2024-11-18 23:04:39.181890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.015 [2024-11-18 23:04:39.181957] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.015 [2024-11-18 23:04:39.181978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.015 pt2 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.015 [2024-11-18 23:04:39.193398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.015 "name": "raid_bdev1", 00:09:20.015 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:20.015 "strip_size_kb": 0, 00:09:20.015 "state": "configuring", 00:09:20.015 "raid_level": "raid1", 00:09:20.015 "superblock": true, 00:09:20.015 "num_base_bdevs": 3, 00:09:20.015 "num_base_bdevs_discovered": 1, 00:09:20.015 "num_base_bdevs_operational": 3, 00:09:20.015 "base_bdevs_list": [ 00:09:20.015 { 00:09:20.015 "name": "pt1", 00:09:20.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.015 "is_configured": true, 00:09:20.015 "data_offset": 2048, 00:09:20.015 "data_size": 63488 00:09:20.015 }, 00:09:20.015 { 00:09:20.015 "name": null, 00:09:20.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.015 "is_configured": false, 00:09:20.015 "data_offset": 0, 00:09:20.015 "data_size": 63488 00:09:20.015 }, 00:09:20.015 { 00:09:20.015 "name": null, 00:09:20.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.015 "is_configured": false, 00:09:20.015 
"data_offset": 2048, 00:09:20.015 "data_size": 63488 00:09:20.015 } 00:09:20.015 ] 00:09:20.015 }' 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.015 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.584 [2024-11-18 23:04:39.668534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.584 [2024-11-18 23:04:39.668632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.584 [2024-11-18 23:04:39.668667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:20.584 [2024-11-18 23:04:39.668691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.584 [2024-11-18 23:04:39.669076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.584 [2024-11-18 23:04:39.669131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.584 [2024-11-18 23:04:39.669219] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.584 [2024-11-18 23:04:39.669273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.584 pt2 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.584 23:04:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.584 [2024-11-18 23:04:39.680494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.584 [2024-11-18 23:04:39.680571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.584 [2024-11-18 23:04:39.680603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:20.584 [2024-11-18 23:04:39.680626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.584 [2024-11-18 23:04:39.680969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.584 [2024-11-18 23:04:39.681028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.584 [2024-11-18 23:04:39.681112] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:20.584 [2024-11-18 23:04:39.681157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.584 [2024-11-18 23:04:39.681306] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:20.584 [2024-11-18 23:04:39.681349] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:20.584 [2024-11-18 23:04:39.681599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:20.584 [2024-11-18 23:04:39.681753] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:20.584 [2024-11-18 23:04:39.681795] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:20.584 [2024-11-18 23:04:39.681925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.584 pt3 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.584 23:04:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.584 "name": "raid_bdev1", 00:09:20.584 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:20.584 "strip_size_kb": 0, 00:09:20.584 "state": "online", 00:09:20.584 "raid_level": "raid1", 00:09:20.584 "superblock": true, 00:09:20.584 "num_base_bdevs": 3, 00:09:20.584 "num_base_bdevs_discovered": 3, 00:09:20.584 "num_base_bdevs_operational": 3, 00:09:20.584 "base_bdevs_list": [ 00:09:20.584 { 00:09:20.584 "name": "pt1", 00:09:20.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.584 "is_configured": true, 00:09:20.584 "data_offset": 2048, 00:09:20.584 "data_size": 63488 00:09:20.584 }, 00:09:20.584 { 00:09:20.584 "name": "pt2", 00:09:20.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.584 "is_configured": true, 00:09:20.584 "data_offset": 2048, 00:09:20.584 "data_size": 63488 00:09:20.584 }, 00:09:20.584 { 00:09:20.584 "name": "pt3", 00:09:20.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.584 "is_configured": true, 00:09:20.584 "data_offset": 2048, 00:09:20.584 "data_size": 63488 00:09:20.584 } 00:09:20.584 ] 00:09:20.584 }' 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.584 23:04:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.843 [2024-11-18 23:04:40.112075] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.843 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.843 "name": "raid_bdev1", 00:09:20.843 "aliases": [ 00:09:20.843 "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b" 00:09:20.843 ], 00:09:20.843 "product_name": "Raid Volume", 00:09:20.843 "block_size": 512, 00:09:20.843 "num_blocks": 63488, 00:09:20.843 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:20.843 "assigned_rate_limits": { 00:09:20.843 "rw_ios_per_sec": 0, 00:09:20.843 "rw_mbytes_per_sec": 0, 00:09:20.843 "r_mbytes_per_sec": 0, 00:09:20.843 "w_mbytes_per_sec": 0 00:09:20.843 }, 00:09:20.843 "claimed": false, 00:09:20.843 "zoned": false, 00:09:20.844 "supported_io_types": { 00:09:20.844 "read": true, 00:09:20.844 "write": true, 00:09:20.844 "unmap": false, 00:09:20.844 "flush": false, 00:09:20.844 "reset": true, 00:09:20.844 "nvme_admin": false, 00:09:20.844 "nvme_io": false, 00:09:20.844 "nvme_io_md": false, 00:09:20.844 "write_zeroes": true, 00:09:20.844 "zcopy": false, 00:09:20.844 "get_zone_info": 
false, 00:09:20.844 "zone_management": false, 00:09:20.844 "zone_append": false, 00:09:20.844 "compare": false, 00:09:20.844 "compare_and_write": false, 00:09:20.844 "abort": false, 00:09:20.844 "seek_hole": false, 00:09:20.844 "seek_data": false, 00:09:20.844 "copy": false, 00:09:20.844 "nvme_iov_md": false 00:09:20.844 }, 00:09:20.844 "memory_domains": [ 00:09:20.844 { 00:09:20.844 "dma_device_id": "system", 00:09:20.844 "dma_device_type": 1 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.844 "dma_device_type": 2 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "dma_device_id": "system", 00:09:20.844 "dma_device_type": 1 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.844 "dma_device_type": 2 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "dma_device_id": "system", 00:09:20.844 "dma_device_type": 1 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.844 "dma_device_type": 2 00:09:20.844 } 00:09:20.844 ], 00:09:20.844 "driver_specific": { 00:09:20.844 "raid": { 00:09:20.844 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:20.844 "strip_size_kb": 0, 00:09:20.844 "state": "online", 00:09:20.844 "raid_level": "raid1", 00:09:20.844 "superblock": true, 00:09:20.844 "num_base_bdevs": 3, 00:09:20.844 "num_base_bdevs_discovered": 3, 00:09:20.844 "num_base_bdevs_operational": 3, 00:09:20.844 "base_bdevs_list": [ 00:09:20.844 { 00:09:20.844 "name": "pt1", 00:09:20.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.844 "is_configured": true, 00:09:20.844 "data_offset": 2048, 00:09:20.844 "data_size": 63488 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "name": "pt2", 00:09:20.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.844 "is_configured": true, 00:09:20.844 "data_offset": 2048, 00:09:20.844 "data_size": 63488 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "name": "pt3", 00:09:20.844 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:20.844 "is_configured": true, 00:09:20.844 "data_offset": 2048, 00:09:20.844 "data_size": 63488 00:09:20.844 } 00:09:20.844 ] 00:09:20.844 } 00:09:20.844 } 00:09:20.844 }' 00:09:20.844 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.844 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.844 pt2 00:09:20.844 pt3' 00:09:20.844 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.104 [2024-11-18 23:04:40.391576] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b '!=' c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b ']' 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.104 [2024-11-18 23:04:40.423359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.104 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.105 23:04:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.105 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.365 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.365 "name": "raid_bdev1", 00:09:21.365 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:21.365 "strip_size_kb": 0, 00:09:21.365 "state": "online", 00:09:21.365 "raid_level": "raid1", 00:09:21.365 "superblock": true, 00:09:21.365 "num_base_bdevs": 3, 00:09:21.365 "num_base_bdevs_discovered": 2, 00:09:21.365 "num_base_bdevs_operational": 2, 00:09:21.365 "base_bdevs_list": [ 00:09:21.365 { 00:09:21.365 "name": null, 00:09:21.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.365 "is_configured": false, 00:09:21.365 "data_offset": 0, 00:09:21.365 "data_size": 63488 00:09:21.365 }, 00:09:21.365 { 00:09:21.365 "name": "pt2", 00:09:21.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.365 "is_configured": true, 00:09:21.365 "data_offset": 2048, 00:09:21.365 "data_size": 63488 00:09:21.365 }, 00:09:21.365 { 00:09:21.365 "name": "pt3", 00:09:21.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.365 "is_configured": true, 00:09:21.365 "data_offset": 2048, 00:09:21.365 "data_size": 63488 00:09:21.365 } 
00:09:21.365 ] 00:09:21.365 }' 00:09:21.365 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.365 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 [2024-11-18 23:04:40.862511] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.625 [2024-11-18 23:04:40.862586] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.625 [2024-11-18 23:04:40.862650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.625 [2024-11-18 23:04:40.862718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.625 [2024-11-18 23:04:40.862728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 23:04:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 [2024-11-18 23:04:40.946376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.625 [2024-11-18 23:04:40.946420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.625 [2024-11-18 23:04:40.946453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:21.625 [2024-11-18 23:04:40.946462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.625 [2024-11-18 23:04:40.948516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.625 [2024-11-18 23:04:40.948602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.625 [2024-11-18 23:04:40.948690] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.625 [2024-11-18 23:04:40.948720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.625 pt2 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.625 23:04:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.625 "name": "raid_bdev1", 00:09:21.625 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:21.625 "strip_size_kb": 0, 00:09:21.625 "state": "configuring", 00:09:21.625 "raid_level": "raid1", 00:09:21.625 "superblock": true, 00:09:21.625 "num_base_bdevs": 3, 00:09:21.625 "num_base_bdevs_discovered": 1, 00:09:21.625 "num_base_bdevs_operational": 2, 00:09:21.625 "base_bdevs_list": [ 00:09:21.625 { 00:09:21.625 "name": null, 00:09:21.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.625 "is_configured": false, 00:09:21.625 "data_offset": 2048, 00:09:21.625 "data_size": 63488 00:09:21.625 }, 00:09:21.625 { 00:09:21.625 "name": "pt2", 00:09:21.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.625 "is_configured": true, 00:09:21.625 "data_offset": 2048, 00:09:21.625 "data_size": 63488 00:09:21.625 }, 00:09:21.625 { 00:09:21.625 "name": null, 00:09:21.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.626 "is_configured": false, 00:09:21.626 "data_offset": 2048, 00:09:21.626 "data_size": 63488 00:09:21.626 } 
00:09:21.626 ] 00:09:21.626 }' 00:09:21.626 23:04:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.626 23:04:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.201 [2024-11-18 23:04:41.353706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.201 [2024-11-18 23:04:41.353816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.201 [2024-11-18 23:04:41.353853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:22.201 [2024-11-18 23:04:41.353885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.201 [2024-11-18 23:04:41.354265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.201 [2024-11-18 23:04:41.354339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.201 [2024-11-18 23:04:41.354439] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:22.201 [2024-11-18 23:04:41.354485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.201 [2024-11-18 23:04:41.354597] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:22.201 [2024-11-18 23:04:41.354631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:22.201 [2024-11-18 23:04:41.354889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:22.201 [2024-11-18 23:04:41.355039] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:22.201 [2024-11-18 23:04:41.355077] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:22.201 [2024-11-18 23:04:41.355244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.201 pt3 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.201 
23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.201 "name": "raid_bdev1", 00:09:22.201 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:22.201 "strip_size_kb": 0, 00:09:22.201 "state": "online", 00:09:22.201 "raid_level": "raid1", 00:09:22.201 "superblock": true, 00:09:22.201 "num_base_bdevs": 3, 00:09:22.201 "num_base_bdevs_discovered": 2, 00:09:22.201 "num_base_bdevs_operational": 2, 00:09:22.201 "base_bdevs_list": [ 00:09:22.201 { 00:09:22.201 "name": null, 00:09:22.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.201 "is_configured": false, 00:09:22.201 "data_offset": 2048, 00:09:22.201 "data_size": 63488 00:09:22.201 }, 00:09:22.201 { 00:09:22.201 "name": "pt2", 00:09:22.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.201 "is_configured": true, 00:09:22.201 "data_offset": 2048, 00:09:22.201 "data_size": 63488 00:09:22.201 }, 00:09:22.201 { 00:09:22.201 "name": "pt3", 00:09:22.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.201 "is_configured": true, 00:09:22.201 "data_offset": 2048, 00:09:22.201 "data_size": 63488 00:09:22.201 } 00:09:22.201 ] 00:09:22.201 }' 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.201 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.461 [2024-11-18 23:04:41.796904] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.461 [2024-11-18 23:04:41.796931] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.461 [2024-11-18 23:04:41.796990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.461 [2024-11-18 23:04:41.797041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.461 [2024-11-18 23:04:41.797051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.461 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.721 [2024-11-18 23:04:41.848806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:22.721 [2024-11-18 23:04:41.848860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.721 [2024-11-18 23:04:41.848892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:22.721 [2024-11-18 23:04:41.848901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.721 [2024-11-18 23:04:41.850942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.721 [2024-11-18 23:04:41.851019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:22.721 [2024-11-18 23:04:41.851087] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:22.721 [2024-11-18 23:04:41.851126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:22.721 [2024-11-18 23:04:41.851269] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:22.721 [2024-11-18 23:04:41.851288] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.721 [2024-11-18 23:04:41.851303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:22.721 [2024-11-18 23:04:41.851357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.721 pt1 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.721 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.721 "name": "raid_bdev1", 00:09:22.721 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:22.721 "strip_size_kb": 0, 00:09:22.721 "state": "configuring", 00:09:22.721 "raid_level": "raid1", 00:09:22.721 "superblock": true, 00:09:22.721 "num_base_bdevs": 3, 00:09:22.721 "num_base_bdevs_discovered": 1, 00:09:22.721 "num_base_bdevs_operational": 2, 00:09:22.721 "base_bdevs_list": [ 00:09:22.721 { 00:09:22.721 "name": null, 00:09:22.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.721 "is_configured": false, 00:09:22.721 "data_offset": 2048, 00:09:22.721 "data_size": 63488 00:09:22.721 }, 00:09:22.721 { 00:09:22.721 "name": "pt2", 00:09:22.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.721 "is_configured": true, 00:09:22.721 "data_offset": 2048, 00:09:22.721 "data_size": 63488 00:09:22.721 }, 00:09:22.721 { 00:09:22.721 "name": null, 00:09:22.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.722 "is_configured": false, 00:09:22.722 "data_offset": 2048, 00:09:22.722 "data_size": 63488 00:09:22.722 } 00:09:22.722 ] 00:09:22.722 }' 00:09:22.722 23:04:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.722 23:04:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.981 [2024-11-18 23:04:42.339973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.981 [2024-11-18 23:04:42.340075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.981 [2024-11-18 23:04:42.340110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:22.981 [2024-11-18 23:04:42.340137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.981 [2024-11-18 23:04:42.340562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.981 [2024-11-18 23:04:42.340624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.981 [2024-11-18 23:04:42.340720] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:22.981 [2024-11-18 23:04:42.340795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.981 [2024-11-18 23:04:42.340926] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:22.981 [2024-11-18 23:04:42.340965] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:22.981 [2024-11-18 23:04:42.341199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:22.981 [2024-11-18 23:04:42.341373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:22.981 [2024-11-18 23:04:42.341416] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:22.981 [2024-11-18 23:04:42.341560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.981 pt3 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.981 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.240 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:23.240 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.240 "name": "raid_bdev1", 00:09:23.240 "uuid": "c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b", 00:09:23.240 "strip_size_kb": 0, 00:09:23.240 "state": "online", 00:09:23.240 "raid_level": "raid1", 00:09:23.240 "superblock": true, 00:09:23.240 "num_base_bdevs": 3, 00:09:23.240 "num_base_bdevs_discovered": 2, 00:09:23.240 "num_base_bdevs_operational": 2, 00:09:23.240 "base_bdevs_list": [ 00:09:23.240 { 00:09:23.240 "name": null, 00:09:23.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.240 "is_configured": false, 00:09:23.240 "data_offset": 2048, 00:09:23.240 "data_size": 63488 00:09:23.240 }, 00:09:23.240 { 00:09:23.240 "name": "pt2", 00:09:23.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.240 "is_configured": true, 00:09:23.240 "data_offset": 2048, 00:09:23.240 "data_size": 63488 00:09:23.240 }, 00:09:23.240 { 00:09:23.240 "name": "pt3", 00:09:23.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.240 "is_configured": true, 00:09:23.240 "data_offset": 2048, 00:09:23.240 "data_size": 63488 00:09:23.240 } 00:09:23.240 ] 00:09:23.240 }' 00:09:23.240 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.240 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 [2024-11-18 23:04:42.847382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.499 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b '!=' c11be8d1-e3b5-4c3e-9c03-d6b757cdbb7b ']' 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79622 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79622 ']' 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79622 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79622 00:09:23.759 killing process with pid 79622 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79622' 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79622 00:09:23.759 [2024-11-18 23:04:42.906816] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.759 [2024-11-18 23:04:42.906892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.759 [2024-11-18 23:04:42.906947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.759 [2024-11-18 23:04:42.906956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:23.759 23:04:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79622 00:09:23.759 [2024-11-18 23:04:42.939688] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.019 23:04:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:24.019 00:09:24.019 real 0m6.445s 00:09:24.019 user 0m10.872s 00:09:24.019 sys 0m1.254s 00:09:24.019 23:04:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.019 23:04:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.019 ************************************ 00:09:24.019 END TEST raid_superblock_test 00:09:24.019 ************************************ 00:09:24.019 23:04:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:24.019 23:04:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:24.019 23:04:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.019 23:04:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.019 ************************************ 00:09:24.019 START TEST raid_read_error_test 00:09:24.019 ************************************ 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:24.019 23:04:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.019 23:04:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GIShxy6jss 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80057 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80057 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80057 ']' 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.019 23:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.019 [2024-11-18 23:04:43.352726] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:24.019 [2024-11-18 23:04:43.352913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80057 ] 00:09:24.280 [2024-11-18 23:04:43.510988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.280 [2024-11-18 23:04:43.555289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.280 [2024-11-18 23:04:43.597042] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.280 [2024-11-18 23:04:43.597154] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 BaseBdev1_malloc 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 true 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 [2024-11-18 23:04:44.191019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.850 [2024-11-18 23:04:44.191072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.850 [2024-11-18 23:04:44.191091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.850 [2024-11-18 23:04:44.191100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.850 [2024-11-18 23:04:44.193256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.850 [2024-11-18 23:04:44.193340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.850 BaseBdev1 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 BaseBdev2_malloc 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.850 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 true 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 [2024-11-18 23:04:44.241624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:25.111 [2024-11-18 23:04:44.241695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.111 [2024-11-18 23:04:44.241719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:25.111 [2024-11-18 23:04:44.241729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.111 [2024-11-18 23:04:44.244313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.111 [2024-11-18 23:04:44.244353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:25.111 BaseBdev2 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 BaseBdev3_malloc 00:09:25.111 23:04:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 true 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 [2024-11-18 23:04:44.281958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:25.111 [2024-11-18 23:04:44.282005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.111 [2024-11-18 23:04:44.282024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:25.111 [2024-11-18 23:04:44.282033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.111 [2024-11-18 23:04:44.284202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.111 [2024-11-18 23:04:44.284239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:25.111 BaseBdev3 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 [2024-11-18 23:04:44.293987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.111 [2024-11-18 23:04:44.295927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.111 [2024-11-18 23:04:44.296008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.111 [2024-11-18 23:04:44.296183] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:25.111 [2024-11-18 23:04:44.296200] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:25.111 [2024-11-18 23:04:44.296456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:25.111 [2024-11-18 23:04:44.296599] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:25.111 [2024-11-18 23:04:44.296620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:25.111 [2024-11-18 23:04:44.296746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.111 23:04:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.111 "name": "raid_bdev1", 00:09:25.111 "uuid": "1e76266f-5498-4b60-a8e3-08761525ffcd", 00:09:25.111 "strip_size_kb": 0, 00:09:25.111 "state": "online", 00:09:25.111 "raid_level": "raid1", 00:09:25.111 "superblock": true, 00:09:25.111 "num_base_bdevs": 3, 00:09:25.111 "num_base_bdevs_discovered": 3, 00:09:25.111 "num_base_bdevs_operational": 3, 00:09:25.111 "base_bdevs_list": [ 00:09:25.111 { 00:09:25.111 "name": "BaseBdev1", 00:09:25.111 "uuid": "ce823d3f-3662-5e19-8fc1-66bd5b75f45d", 00:09:25.111 "is_configured": true, 00:09:25.111 "data_offset": 2048, 00:09:25.111 "data_size": 63488 00:09:25.111 }, 00:09:25.111 { 00:09:25.111 "name": "BaseBdev2", 00:09:25.111 "uuid": "a6134838-1020-5262-b7ab-9401eecf01b7", 00:09:25.111 "is_configured": true, 00:09:25.111 "data_offset": 2048, 00:09:25.111 "data_size": 63488 
00:09:25.111 }, 00:09:25.111 { 00:09:25.111 "name": "BaseBdev3", 00:09:25.111 "uuid": "85e9b6c6-f49b-5c43-b4d1-27bf62cb0560", 00:09:25.111 "is_configured": true, 00:09:25.111 "data_offset": 2048, 00:09:25.111 "data_size": 63488 00:09:25.111 } 00:09:25.111 ] 00:09:25.111 }' 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.111 23:04:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.371 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.371 23:04:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:25.629 [2024-11-18 23:04:44.785483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.569 
23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.569 "name": "raid_bdev1", 00:09:26.569 "uuid": "1e76266f-5498-4b60-a8e3-08761525ffcd", 00:09:26.569 "strip_size_kb": 0, 00:09:26.569 "state": "online", 00:09:26.569 "raid_level": "raid1", 00:09:26.569 "superblock": true, 00:09:26.569 "num_base_bdevs": 3, 00:09:26.569 "num_base_bdevs_discovered": 3, 00:09:26.569 "num_base_bdevs_operational": 3, 00:09:26.569 "base_bdevs_list": [ 00:09:26.569 { 00:09:26.569 "name": "BaseBdev1", 00:09:26.569 "uuid": "ce823d3f-3662-5e19-8fc1-66bd5b75f45d", 
00:09:26.569 "is_configured": true, 00:09:26.569 "data_offset": 2048, 00:09:26.569 "data_size": 63488 00:09:26.569 }, 00:09:26.569 { 00:09:26.569 "name": "BaseBdev2", 00:09:26.569 "uuid": "a6134838-1020-5262-b7ab-9401eecf01b7", 00:09:26.569 "is_configured": true, 00:09:26.569 "data_offset": 2048, 00:09:26.569 "data_size": 63488 00:09:26.569 }, 00:09:26.569 { 00:09:26.569 "name": "BaseBdev3", 00:09:26.569 "uuid": "85e9b6c6-f49b-5c43-b4d1-27bf62cb0560", 00:09:26.569 "is_configured": true, 00:09:26.569 "data_offset": 2048, 00:09:26.569 "data_size": 63488 00:09:26.569 } 00:09:26.569 ] 00:09:26.569 }' 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.569 23:04:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.834 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.834 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 [2024-11-18 23:04:46.172356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.834 [2024-11-18 23:04:46.172460] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.835 [2024-11-18 23:04:46.174869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.835 [2024-11-18 23:04:46.174925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.835 [2024-11-18 23:04:46.175017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.835 [2024-11-18 23:04:46.175029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:26.835 { 00:09:26.835 "results": [ 00:09:26.835 { 00:09:26.835 "job": "raid_bdev1", 00:09:26.835 "core_mask": "0x1", 00:09:26.835 "workload": "randrw", 00:09:26.835 "percentage": 50, 00:09:26.835 "status": "finished", 00:09:26.835 "queue_depth": 1, 00:09:26.835 "io_size": 131072, 00:09:26.835 "runtime": 1.387728, 00:09:26.835 "iops": 14977.718976629427, 00:09:26.835 "mibps": 1872.2148720786784, 00:09:26.835 "io_failed": 0, 00:09:26.835 "io_timeout": 0, 00:09:26.835 "avg_latency_us": 64.32990939678746, 00:09:26.835 "min_latency_us": 21.575545851528386, 00:09:26.835 "max_latency_us": 1438.071615720524 00:09:26.835 } 00:09:26.835 ], 00:09:26.835 "core_count": 1 00:09:26.835 } 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80057 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80057 ']' 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80057 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.835 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80057 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80057' 00:09:27.110 killing process with pid 80057 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80057 00:09:27.110 [2024-11-18 23:04:46.221472] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.110 23:04:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80057 00:09:27.110 [2024-11-18 23:04:46.246944] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GIShxy6jss 00:09:27.110 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:27.370 00:09:27.370 real 0m3.239s 00:09:27.370 user 0m4.071s 00:09:27.370 sys 0m0.530s 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.370 23:04:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.370 ************************************ 00:09:27.370 END TEST raid_read_error_test 00:09:27.370 ************************************ 00:09:27.370 23:04:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:27.370 23:04:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:27.370 23:04:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.370 23:04:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.370 ************************************ 00:09:27.370 START TEST raid_write_error_test 00:09:27.370 ************************************ 00:09:27.370 23:04:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:27.370 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.66d40G93SS 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80191 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80191 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80191 ']' 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.371 23:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.371 [2024-11-18 23:04:46.663004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:27.371 [2024-11-18 23:04:46.663241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80191 ] 00:09:27.631 [2024-11-18 23:04:46.824870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.631 [2024-11-18 23:04:46.869692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.631 [2024-11-18 23:04:46.911647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.631 [2024-11-18 23:04:46.911685] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.200 BaseBdev1_malloc 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.200 true 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.200 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.200 [2024-11-18 23:04:47.517692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.201 [2024-11-18 23:04:47.517809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.201 [2024-11-18 23:04:47.517848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.201 [2024-11-18 23:04:47.517876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.201 [2024-11-18 23:04:47.520015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.201 [2024-11-18 23:04:47.520086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.201 BaseBdev1 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.201 BaseBdev2_malloc 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.201 true 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.201 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.201 [2024-11-18 23:04:47.574612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.201 [2024-11-18 23:04:47.574683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.201 [2024-11-18 23:04:47.574711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.201 [2024-11-18 23:04:47.574725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.460 [2024-11-18 23:04:47.577974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.460 [2024-11-18 23:04:47.578012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.460 BaseBdev2 00:09:28.460 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.460 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.460 23:04:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.460 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.461 BaseBdev3_malloc 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.461 true 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.461 [2024-11-18 23:04:47.615475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.461 [2024-11-18 23:04:47.615523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.461 [2024-11-18 23:04:47.615542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.461 [2024-11-18 23:04:47.615552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.461 [2024-11-18 23:04:47.617593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.461 [2024-11-18 23:04:47.617629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:28.461 BaseBdev3 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.461 [2024-11-18 23:04:47.627520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.461 [2024-11-18 23:04:47.629380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.461 [2024-11-18 23:04:47.629464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.461 [2024-11-18 23:04:47.629638] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:28.461 [2024-11-18 23:04:47.629659] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.461 [2024-11-18 23:04:47.629891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:28.461 [2024-11-18 23:04:47.630051] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:28.461 [2024-11-18 23:04:47.630067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:28.461 [2024-11-18 23:04:47.630200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.461 "name": "raid_bdev1", 00:09:28.461 "uuid": "dabd8b8a-44c7-4c1e-b95b-b4edf1e07d78", 00:09:28.461 "strip_size_kb": 0, 00:09:28.461 "state": "online", 00:09:28.461 "raid_level": "raid1", 00:09:28.461 "superblock": true, 00:09:28.461 "num_base_bdevs": 3, 00:09:28.461 "num_base_bdevs_discovered": 3, 00:09:28.461 "num_base_bdevs_operational": 3, 00:09:28.461 "base_bdevs_list": [ 00:09:28.461 { 00:09:28.461 "name": "BaseBdev1", 00:09:28.461 
"uuid": "7452d70d-3c4d-5b0a-bda9-417c0455dedd", 00:09:28.461 "is_configured": true, 00:09:28.461 "data_offset": 2048, 00:09:28.461 "data_size": 63488 00:09:28.461 }, 00:09:28.461 { 00:09:28.461 "name": "BaseBdev2", 00:09:28.461 "uuid": "8747c69d-0b05-5166-ae07-514844b83b37", 00:09:28.461 "is_configured": true, 00:09:28.461 "data_offset": 2048, 00:09:28.461 "data_size": 63488 00:09:28.461 }, 00:09:28.461 { 00:09:28.461 "name": "BaseBdev3", 00:09:28.461 "uuid": "73f46f81-f9e5-5776-b318-a2b37d0ee5b4", 00:09:28.461 "is_configured": true, 00:09:28.461 "data_offset": 2048, 00:09:28.461 "data_size": 63488 00:09:28.461 } 00:09:28.461 ] 00:09:28.461 }' 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.461 23:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.721 23:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.721 23:04:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.980 [2024-11-18 23:04:48.158930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.933 [2024-11-18 23:04:49.081399] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:29.933 [2024-11-18 23:04:49.081569] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.933 [2024-11-18 23:04:49.081831] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.933 "name": "raid_bdev1", 00:09:29.933 "uuid": "dabd8b8a-44c7-4c1e-b95b-b4edf1e07d78", 00:09:29.933 "strip_size_kb": 0, 00:09:29.933 "state": "online", 00:09:29.933 "raid_level": "raid1", 00:09:29.933 "superblock": true, 00:09:29.933 "num_base_bdevs": 3, 00:09:29.933 "num_base_bdevs_discovered": 2, 00:09:29.933 "num_base_bdevs_operational": 2, 00:09:29.933 "base_bdevs_list": [ 00:09:29.933 { 00:09:29.933 "name": null, 00:09:29.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.933 "is_configured": false, 00:09:29.933 "data_offset": 0, 00:09:29.933 "data_size": 63488 00:09:29.933 }, 00:09:29.933 { 00:09:29.933 "name": "BaseBdev2", 00:09:29.933 "uuid": "8747c69d-0b05-5166-ae07-514844b83b37", 00:09:29.933 "is_configured": true, 00:09:29.933 "data_offset": 2048, 00:09:29.933 "data_size": 63488 00:09:29.933 }, 00:09:29.933 { 00:09:29.933 "name": "BaseBdev3", 00:09:29.933 "uuid": "73f46f81-f9e5-5776-b318-a2b37d0ee5b4", 00:09:29.933 "is_configured": true, 00:09:29.933 "data_offset": 2048, 00:09:29.933 "data_size": 63488 00:09:29.933 } 00:09:29.933 ] 00:09:29.933 }' 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.933 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.193 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.194 [2024-11-18 23:04:49.535598] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.194 [2024-11-18 23:04:49.535697] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.194 [2024-11-18 23:04:49.538217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.194 [2024-11-18 23:04:49.538333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.194 [2024-11-18 23:04:49.538454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.194 [2024-11-18 23:04:49.538502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:30.194 { 00:09:30.194 "results": [ 00:09:30.194 { 00:09:30.194 "job": "raid_bdev1", 00:09:30.194 "core_mask": "0x1", 00:09:30.194 "workload": "randrw", 00:09:30.194 "percentage": 50, 00:09:30.194 "status": "finished", 00:09:30.194 "queue_depth": 1, 00:09:30.194 "io_size": 131072, 00:09:30.194 "runtime": 1.37758, 00:09:30.194 "iops": 16744.581076961047, 00:09:30.194 "mibps": 2093.072634620131, 00:09:30.194 "io_failed": 0, 00:09:30.194 "io_timeout": 0, 00:09:30.194 "avg_latency_us": 57.26060999825267, 00:09:30.194 "min_latency_us": 21.687336244541484, 00:09:30.194 "max_latency_us": 1395.1441048034935 00:09:30.194 } 00:09:30.194 ], 00:09:30.194 "core_count": 1 00:09:30.194 } 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80191 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80191 ']' 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80191 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:30.194 23:04:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.194 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80191 00:09:30.454 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.454 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.454 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80191' 00:09:30.454 killing process with pid 80191 00:09:30.454 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80191 00:09:30.454 [2024-11-18 23:04:49.572545] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.454 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80191 00:09:30.454 [2024-11-18 23:04:49.598219] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.66d40G93SS 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:30.714 ************************************ 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:30.714 00:09:30.714 real 0m3.281s 00:09:30.714 user 0m4.139s 00:09:30.714 sys 
0m0.513s 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.714 23:04:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.714 END TEST raid_write_error_test 00:09:30.714 ************************************ 00:09:30.714 23:04:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:30.714 23:04:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:30.714 23:04:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:30.714 23:04:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:30.714 23:04:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.714 23:04:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.714 ************************************ 00:09:30.714 START TEST raid_state_function_test 00:09:30.714 ************************************ 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:30.714 23:04:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80318 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80318' 00:09:30.714 Process raid pid: 80318 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80318 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80318 ']' 00:09:30.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.714 23:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.714 [2024-11-18 23:04:50.008436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:30.714 [2024-11-18 23:04:50.008663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.975 [2024-11-18 23:04:50.169349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.975 [2024-11-18 23:04:50.213733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.975 [2024-11-18 23:04:50.255315] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.975 [2024-11-18 23:04:50.255350] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 [2024-11-18 23:04:50.836549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.544 [2024-11-18 23:04:50.836662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.544 [2024-11-18 23:04:50.836680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.544 [2024-11-18 23:04:50.836690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.544 [2024-11-18 23:04:50.836696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:31.544 [2024-11-18 23:04:50.836708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.544 [2024-11-18 23:04:50.836714] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:31.544 [2024-11-18 23:04:50.836722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.544 "name": "Existed_Raid", 00:09:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.544 "strip_size_kb": 64, 00:09:31.544 "state": "configuring", 00:09:31.544 "raid_level": "raid0", 00:09:31.544 "superblock": false, 00:09:31.544 "num_base_bdevs": 4, 00:09:31.544 "num_base_bdevs_discovered": 0, 00:09:31.544 "num_base_bdevs_operational": 4, 00:09:31.544 "base_bdevs_list": [ 00:09:31.544 { 00:09:31.544 "name": "BaseBdev1", 00:09:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.544 "is_configured": false, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 0 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "name": "BaseBdev2", 00:09:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.544 "is_configured": false, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 0 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "name": "BaseBdev3", 00:09:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.544 "is_configured": false, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 0 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "name": "BaseBdev4", 00:09:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.544 "is_configured": false, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 0 00:09:31.544 } 00:09:31.544 ] 00:09:31.544 }' 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.544 23:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 [2024-11-18 23:04:51.271678] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.112 [2024-11-18 23:04:51.271760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 [2024-11-18 23:04:51.283702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.112 [2024-11-18 23:04:51.283776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.112 [2024-11-18 23:04:51.283803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.112 [2024-11-18 23:04:51.283825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.112 [2024-11-18 23:04:51.283842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.112 [2024-11-18 23:04:51.283862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.112 [2024-11-18 23:04:51.283879] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.112 [2024-11-18 23:04:51.283899] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 [2024-11-18 23:04:51.304343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.112 BaseBdev1 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 [ 00:09:32.112 { 00:09:32.112 "name": "BaseBdev1", 00:09:32.112 "aliases": [ 00:09:32.112 "b96b4cd5-f5cb-4291-befb-73397c47e64d" 00:09:32.112 ], 00:09:32.112 "product_name": "Malloc disk", 00:09:32.112 "block_size": 512, 00:09:32.112 "num_blocks": 65536, 00:09:32.112 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:32.112 "assigned_rate_limits": { 00:09:32.112 "rw_ios_per_sec": 0, 00:09:32.112 "rw_mbytes_per_sec": 0, 00:09:32.112 "r_mbytes_per_sec": 0, 00:09:32.112 "w_mbytes_per_sec": 0 00:09:32.112 }, 00:09:32.112 "claimed": true, 00:09:32.112 "claim_type": "exclusive_write", 00:09:32.112 "zoned": false, 00:09:32.112 "supported_io_types": { 00:09:32.112 "read": true, 00:09:32.112 "write": true, 00:09:32.112 "unmap": true, 00:09:32.112 "flush": true, 00:09:32.112 "reset": true, 00:09:32.112 "nvme_admin": false, 00:09:32.112 "nvme_io": false, 00:09:32.112 "nvme_io_md": false, 00:09:32.112 "write_zeroes": true, 00:09:32.112 "zcopy": true, 00:09:32.112 "get_zone_info": false, 00:09:32.112 "zone_management": false, 00:09:32.112 "zone_append": false, 00:09:32.112 "compare": false, 00:09:32.112 "compare_and_write": false, 00:09:32.112 "abort": true, 00:09:32.112 "seek_hole": false, 00:09:32.112 "seek_data": false, 00:09:32.112 "copy": true, 00:09:32.112 "nvme_iov_md": false 00:09:32.112 }, 00:09:32.112 "memory_domains": [ 00:09:32.112 { 00:09:32.112 "dma_device_id": "system", 00:09:32.112 "dma_device_type": 1 00:09:32.112 }, 00:09:32.112 { 00:09:32.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.112 "dma_device_type": 2 00:09:32.112 } 00:09:32.112 ], 00:09:32.112 "driver_specific": {} 00:09:32.112 } 00:09:32.112 ] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.112 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.113 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.113 "name": "Existed_Raid", 
00:09:32.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.113 "strip_size_kb": 64, 00:09:32.113 "state": "configuring", 00:09:32.113 "raid_level": "raid0", 00:09:32.113 "superblock": false, 00:09:32.113 "num_base_bdevs": 4, 00:09:32.113 "num_base_bdevs_discovered": 1, 00:09:32.113 "num_base_bdevs_operational": 4, 00:09:32.113 "base_bdevs_list": [ 00:09:32.113 { 00:09:32.113 "name": "BaseBdev1", 00:09:32.113 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:32.113 "is_configured": true, 00:09:32.113 "data_offset": 0, 00:09:32.113 "data_size": 65536 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "name": "BaseBdev2", 00:09:32.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.113 "is_configured": false, 00:09:32.113 "data_offset": 0, 00:09:32.113 "data_size": 0 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "name": "BaseBdev3", 00:09:32.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.113 "is_configured": false, 00:09:32.113 "data_offset": 0, 00:09:32.113 "data_size": 0 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "name": "BaseBdev4", 00:09:32.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.113 "is_configured": false, 00:09:32.113 "data_offset": 0, 00:09:32.113 "data_size": 0 00:09:32.113 } 00:09:32.113 ] 00:09:32.113 }' 00:09:32.113 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.113 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.683 [2024-11-18 23:04:51.799501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.683 [2024-11-18 23:04:51.799545] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.683 [2024-11-18 23:04:51.811515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.683 [2024-11-18 23:04:51.813420] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.683 [2024-11-18 23:04:51.813457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.683 [2024-11-18 23:04:51.813466] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.683 [2024-11-18 23:04:51.813475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.683 [2024-11-18 23:04:51.813481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.683 [2024-11-18 23:04:51.813489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.683 "name": "Existed_Raid", 00:09:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.683 "strip_size_kb": 64, 00:09:32.683 "state": "configuring", 00:09:32.683 "raid_level": "raid0", 00:09:32.683 "superblock": false, 00:09:32.683 "num_base_bdevs": 4, 00:09:32.683 
"num_base_bdevs_discovered": 1, 00:09:32.683 "num_base_bdevs_operational": 4, 00:09:32.683 "base_bdevs_list": [ 00:09:32.683 { 00:09:32.683 "name": "BaseBdev1", 00:09:32.683 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:32.683 "is_configured": true, 00:09:32.683 "data_offset": 0, 00:09:32.683 "data_size": 65536 00:09:32.683 }, 00:09:32.683 { 00:09:32.683 "name": "BaseBdev2", 00:09:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.683 "is_configured": false, 00:09:32.683 "data_offset": 0, 00:09:32.683 "data_size": 0 00:09:32.683 }, 00:09:32.683 { 00:09:32.683 "name": "BaseBdev3", 00:09:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.683 "is_configured": false, 00:09:32.683 "data_offset": 0, 00:09:32.683 "data_size": 0 00:09:32.683 }, 00:09:32.683 { 00:09:32.683 "name": "BaseBdev4", 00:09:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.683 "is_configured": false, 00:09:32.683 "data_offset": 0, 00:09:32.683 "data_size": 0 00:09:32.683 } 00:09:32.683 ] 00:09:32.683 }' 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.683 23:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.944 [2024-11-18 23:04:52.243253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.944 BaseBdev2 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.944 23:04:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.944 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 [ 00:09:32.945 { 00:09:32.945 "name": "BaseBdev2", 00:09:32.945 "aliases": [ 00:09:32.945 "b439a8ed-b2d4-4c49-a375-ca9c32d8a722" 00:09:32.945 ], 00:09:32.945 "product_name": "Malloc disk", 00:09:32.945 "block_size": 512, 00:09:32.945 "num_blocks": 65536, 00:09:32.945 "uuid": "b439a8ed-b2d4-4c49-a375-ca9c32d8a722", 00:09:32.945 "assigned_rate_limits": { 00:09:32.945 "rw_ios_per_sec": 0, 00:09:32.945 "rw_mbytes_per_sec": 0, 00:09:32.945 "r_mbytes_per_sec": 0, 00:09:32.945 "w_mbytes_per_sec": 0 00:09:32.945 }, 00:09:32.945 "claimed": true, 00:09:32.945 "claim_type": "exclusive_write", 00:09:32.945 "zoned": false, 00:09:32.945 "supported_io_types": { 
00:09:32.945 "read": true, 00:09:32.945 "write": true, 00:09:32.945 "unmap": true, 00:09:32.945 "flush": true, 00:09:32.945 "reset": true, 00:09:32.945 "nvme_admin": false, 00:09:32.945 "nvme_io": false, 00:09:32.945 "nvme_io_md": false, 00:09:32.945 "write_zeroes": true, 00:09:32.945 "zcopy": true, 00:09:32.945 "get_zone_info": false, 00:09:32.945 "zone_management": false, 00:09:32.945 "zone_append": false, 00:09:32.945 "compare": false, 00:09:32.945 "compare_and_write": false, 00:09:32.945 "abort": true, 00:09:32.945 "seek_hole": false, 00:09:32.945 "seek_data": false, 00:09:32.945 "copy": true, 00:09:32.945 "nvme_iov_md": false 00:09:32.945 }, 00:09:32.945 "memory_domains": [ 00:09:32.945 { 00:09:32.945 "dma_device_id": "system", 00:09:32.945 "dma_device_type": 1 00:09:32.945 }, 00:09:32.945 { 00:09:32.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.945 "dma_device_type": 2 00:09:32.945 } 00:09:32.945 ], 00:09:32.945 "driver_specific": {} 00:09:32.945 } 00:09:32.945 ] 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.945 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.205 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.205 "name": "Existed_Raid", 00:09:33.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.205 "strip_size_kb": 64, 00:09:33.205 "state": "configuring", 00:09:33.205 "raid_level": "raid0", 00:09:33.205 "superblock": false, 00:09:33.205 "num_base_bdevs": 4, 00:09:33.205 "num_base_bdevs_discovered": 2, 00:09:33.205 "num_base_bdevs_operational": 4, 00:09:33.205 "base_bdevs_list": [ 00:09:33.205 { 00:09:33.205 "name": "BaseBdev1", 00:09:33.205 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:33.205 "is_configured": true, 00:09:33.205 "data_offset": 0, 00:09:33.205 "data_size": 65536 00:09:33.205 }, 00:09:33.205 { 00:09:33.205 "name": "BaseBdev2", 00:09:33.205 "uuid": "b439a8ed-b2d4-4c49-a375-ca9c32d8a722", 00:09:33.205 
"is_configured": true, 00:09:33.205 "data_offset": 0, 00:09:33.205 "data_size": 65536 00:09:33.205 }, 00:09:33.205 { 00:09:33.205 "name": "BaseBdev3", 00:09:33.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.205 "is_configured": false, 00:09:33.205 "data_offset": 0, 00:09:33.205 "data_size": 0 00:09:33.205 }, 00:09:33.205 { 00:09:33.205 "name": "BaseBdev4", 00:09:33.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.205 "is_configured": false, 00:09:33.205 "data_offset": 0, 00:09:33.205 "data_size": 0 00:09:33.205 } 00:09:33.205 ] 00:09:33.205 }' 00:09:33.205 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.205 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.466 [2024-11-18 23:04:52.713317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.466 BaseBdev3 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.466 [ 00:09:33.466 { 00:09:33.466 "name": "BaseBdev3", 00:09:33.466 "aliases": [ 00:09:33.466 "c77bfbc8-a7d7-40b3-8e26-726f784253c8" 00:09:33.466 ], 00:09:33.466 "product_name": "Malloc disk", 00:09:33.466 "block_size": 512, 00:09:33.466 "num_blocks": 65536, 00:09:33.466 "uuid": "c77bfbc8-a7d7-40b3-8e26-726f784253c8", 00:09:33.466 "assigned_rate_limits": { 00:09:33.466 "rw_ios_per_sec": 0, 00:09:33.466 "rw_mbytes_per_sec": 0, 00:09:33.466 "r_mbytes_per_sec": 0, 00:09:33.466 "w_mbytes_per_sec": 0 00:09:33.466 }, 00:09:33.466 "claimed": true, 00:09:33.466 "claim_type": "exclusive_write", 00:09:33.466 "zoned": false, 00:09:33.466 "supported_io_types": { 00:09:33.466 "read": true, 00:09:33.466 "write": true, 00:09:33.466 "unmap": true, 00:09:33.466 "flush": true, 00:09:33.466 "reset": true, 00:09:33.466 "nvme_admin": false, 00:09:33.466 "nvme_io": false, 00:09:33.466 "nvme_io_md": false, 00:09:33.466 "write_zeroes": true, 00:09:33.466 "zcopy": true, 00:09:33.466 "get_zone_info": false, 00:09:33.466 "zone_management": false, 00:09:33.466 "zone_append": false, 00:09:33.466 "compare": false, 00:09:33.466 "compare_and_write": false, 
00:09:33.466 "abort": true, 00:09:33.466 "seek_hole": false, 00:09:33.466 "seek_data": false, 00:09:33.466 "copy": true, 00:09:33.466 "nvme_iov_md": false 00:09:33.466 }, 00:09:33.466 "memory_domains": [ 00:09:33.466 { 00:09:33.466 "dma_device_id": "system", 00:09:33.466 "dma_device_type": 1 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.466 "dma_device_type": 2 00:09:33.466 } 00:09:33.466 ], 00:09:33.466 "driver_specific": {} 00:09:33.466 } 00:09:33.466 ] 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.466 "name": "Existed_Raid", 00:09:33.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.466 "strip_size_kb": 64, 00:09:33.466 "state": "configuring", 00:09:33.466 "raid_level": "raid0", 00:09:33.466 "superblock": false, 00:09:33.466 "num_base_bdevs": 4, 00:09:33.466 "num_base_bdevs_discovered": 3, 00:09:33.466 "num_base_bdevs_operational": 4, 00:09:33.466 "base_bdevs_list": [ 00:09:33.466 { 00:09:33.466 "name": "BaseBdev1", 00:09:33.466 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:33.466 "is_configured": true, 00:09:33.466 "data_offset": 0, 00:09:33.466 "data_size": 65536 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "name": "BaseBdev2", 00:09:33.466 "uuid": "b439a8ed-b2d4-4c49-a375-ca9c32d8a722", 00:09:33.466 "is_configured": true, 00:09:33.466 "data_offset": 0, 00:09:33.466 "data_size": 65536 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "name": "BaseBdev3", 00:09:33.466 "uuid": "c77bfbc8-a7d7-40b3-8e26-726f784253c8", 00:09:33.466 "is_configured": true, 00:09:33.466 "data_offset": 0, 00:09:33.466 "data_size": 65536 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "name": "BaseBdev4", 00:09:33.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.466 "is_configured": false, 
00:09:33.466 "data_offset": 0, 00:09:33.466 "data_size": 0 00:09:33.466 } 00:09:33.466 ] 00:09:33.466 }' 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.466 23:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.035 [2024-11-18 23:04:53.223401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:34.035 [2024-11-18 23:04:53.223505] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:34.035 [2024-11-18 23:04:53.223533] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:34.035 [2024-11-18 23:04:53.223857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:34.035 [2024-11-18 23:04:53.224047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:34.035 [2024-11-18 23:04:53.224092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:34.035 [2024-11-18 23:04:53.224350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.035 BaseBdev4 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.035 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.035 [ 00:09:34.035 { 00:09:34.035 "name": "BaseBdev4", 00:09:34.035 "aliases": [ 00:09:34.035 "eaab9847-7d8b-4930-ac95-ae1b5c755e16" 00:09:34.035 ], 00:09:34.035 "product_name": "Malloc disk", 00:09:34.035 "block_size": 512, 00:09:34.035 "num_blocks": 65536, 00:09:34.035 "uuid": "eaab9847-7d8b-4930-ac95-ae1b5c755e16", 00:09:34.035 "assigned_rate_limits": { 00:09:34.035 "rw_ios_per_sec": 0, 00:09:34.035 "rw_mbytes_per_sec": 0, 00:09:34.035 "r_mbytes_per_sec": 0, 00:09:34.035 "w_mbytes_per_sec": 0 00:09:34.035 }, 00:09:34.035 "claimed": true, 00:09:34.035 "claim_type": "exclusive_write", 00:09:34.035 "zoned": false, 00:09:34.035 "supported_io_types": { 00:09:34.035 "read": true, 00:09:34.035 "write": true, 00:09:34.035 "unmap": true, 00:09:34.035 "flush": true, 00:09:34.035 "reset": true, 00:09:34.036 
"nvme_admin": false, 00:09:34.036 "nvme_io": false, 00:09:34.036 "nvme_io_md": false, 00:09:34.036 "write_zeroes": true, 00:09:34.036 "zcopy": true, 00:09:34.036 "get_zone_info": false, 00:09:34.036 "zone_management": false, 00:09:34.036 "zone_append": false, 00:09:34.036 "compare": false, 00:09:34.036 "compare_and_write": false, 00:09:34.036 "abort": true, 00:09:34.036 "seek_hole": false, 00:09:34.036 "seek_data": false, 00:09:34.036 "copy": true, 00:09:34.036 "nvme_iov_md": false 00:09:34.036 }, 00:09:34.036 "memory_domains": [ 00:09:34.036 { 00:09:34.036 "dma_device_id": "system", 00:09:34.036 "dma_device_type": 1 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.036 "dma_device_type": 2 00:09:34.036 } 00:09:34.036 ], 00:09:34.036 "driver_specific": {} 00:09:34.036 } 00:09:34.036 ] 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.036 23:04:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.036 "name": "Existed_Raid", 00:09:34.036 "uuid": "dc06de8d-4cc2-4cd8-9165-7eb92a0004c8", 00:09:34.036 "strip_size_kb": 64, 00:09:34.036 "state": "online", 00:09:34.036 "raid_level": "raid0", 00:09:34.036 "superblock": false, 00:09:34.036 "num_base_bdevs": 4, 00:09:34.036 "num_base_bdevs_discovered": 4, 00:09:34.036 "num_base_bdevs_operational": 4, 00:09:34.036 "base_bdevs_list": [ 00:09:34.036 { 00:09:34.036 "name": "BaseBdev1", 00:09:34.036 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:34.036 "is_configured": true, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 65536 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "name": "BaseBdev2", 00:09:34.036 "uuid": "b439a8ed-b2d4-4c49-a375-ca9c32d8a722", 00:09:34.036 "is_configured": true, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 65536 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "name": "BaseBdev3", 00:09:34.036 "uuid": 
"c77bfbc8-a7d7-40b3-8e26-726f784253c8", 00:09:34.036 "is_configured": true, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 65536 00:09:34.036 }, 00:09:34.036 { 00:09:34.036 "name": "BaseBdev4", 00:09:34.036 "uuid": "eaab9847-7d8b-4930-ac95-ae1b5c755e16", 00:09:34.036 "is_configured": true, 00:09:34.036 "data_offset": 0, 00:09:34.036 "data_size": 65536 00:09:34.036 } 00:09:34.036 ] 00:09:34.036 }' 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.036 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.605 [2024-11-18 23:04:53.682960] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.605 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.605 23:04:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.605 "name": "Existed_Raid", 00:09:34.605 "aliases": [ 00:09:34.605 "dc06de8d-4cc2-4cd8-9165-7eb92a0004c8" 00:09:34.605 ], 00:09:34.605 "product_name": "Raid Volume", 00:09:34.605 "block_size": 512, 00:09:34.605 "num_blocks": 262144, 00:09:34.605 "uuid": "dc06de8d-4cc2-4cd8-9165-7eb92a0004c8", 00:09:34.605 "assigned_rate_limits": { 00:09:34.605 "rw_ios_per_sec": 0, 00:09:34.605 "rw_mbytes_per_sec": 0, 00:09:34.605 "r_mbytes_per_sec": 0, 00:09:34.605 "w_mbytes_per_sec": 0 00:09:34.605 }, 00:09:34.605 "claimed": false, 00:09:34.605 "zoned": false, 00:09:34.605 "supported_io_types": { 00:09:34.605 "read": true, 00:09:34.605 "write": true, 00:09:34.605 "unmap": true, 00:09:34.606 "flush": true, 00:09:34.606 "reset": true, 00:09:34.606 "nvme_admin": false, 00:09:34.606 "nvme_io": false, 00:09:34.606 "nvme_io_md": false, 00:09:34.606 "write_zeroes": true, 00:09:34.606 "zcopy": false, 00:09:34.606 "get_zone_info": false, 00:09:34.606 "zone_management": false, 00:09:34.606 "zone_append": false, 00:09:34.606 "compare": false, 00:09:34.606 "compare_and_write": false, 00:09:34.606 "abort": false, 00:09:34.606 "seek_hole": false, 00:09:34.606 "seek_data": false, 00:09:34.606 "copy": false, 00:09:34.606 "nvme_iov_md": false 00:09:34.606 }, 00:09:34.606 "memory_domains": [ 00:09:34.606 { 00:09:34.606 "dma_device_id": "system", 00:09:34.606 "dma_device_type": 1 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.606 "dma_device_type": 2 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "system", 00:09:34.606 "dma_device_type": 1 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.606 "dma_device_type": 2 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "system", 00:09:34.606 "dma_device_type": 1 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:34.606 "dma_device_type": 2 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "system", 00:09:34.606 "dma_device_type": 1 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.606 "dma_device_type": 2 00:09:34.606 } 00:09:34.606 ], 00:09:34.606 "driver_specific": { 00:09:34.606 "raid": { 00:09:34.606 "uuid": "dc06de8d-4cc2-4cd8-9165-7eb92a0004c8", 00:09:34.606 "strip_size_kb": 64, 00:09:34.606 "state": "online", 00:09:34.606 "raid_level": "raid0", 00:09:34.606 "superblock": false, 00:09:34.606 "num_base_bdevs": 4, 00:09:34.606 "num_base_bdevs_discovered": 4, 00:09:34.606 "num_base_bdevs_operational": 4, 00:09:34.606 "base_bdevs_list": [ 00:09:34.606 { 00:09:34.606 "name": "BaseBdev1", 00:09:34.606 "uuid": "b96b4cd5-f5cb-4291-befb-73397c47e64d", 00:09:34.606 "is_configured": true, 00:09:34.606 "data_offset": 0, 00:09:34.606 "data_size": 65536 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "name": "BaseBdev2", 00:09:34.606 "uuid": "b439a8ed-b2d4-4c49-a375-ca9c32d8a722", 00:09:34.606 "is_configured": true, 00:09:34.606 "data_offset": 0, 00:09:34.606 "data_size": 65536 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "name": "BaseBdev3", 00:09:34.606 "uuid": "c77bfbc8-a7d7-40b3-8e26-726f784253c8", 00:09:34.606 "is_configured": true, 00:09:34.606 "data_offset": 0, 00:09:34.606 "data_size": 65536 00:09:34.606 }, 00:09:34.606 { 00:09:34.606 "name": "BaseBdev4", 00:09:34.606 "uuid": "eaab9847-7d8b-4930-ac95-ae1b5c755e16", 00:09:34.606 "is_configured": true, 00:09:34.606 "data_offset": 0, 00:09:34.606 "data_size": 65536 00:09:34.606 } 00:09:34.606 ] 00:09:34.606 } 00:09:34.606 } 00:09:34.606 }' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:34.606 BaseBdev2 00:09:34.606 BaseBdev3 
00:09:34.606 BaseBdev4' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.606 23:04:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.606 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.866 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.866 23:04:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.866 23:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.866 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.866 23:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 [2024-11-18 23:04:53.998135] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.866 [2024-11-18 23:04:53.998164] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.866 [2024-11-18 23:04:53.998209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.866 "name": "Existed_Raid", 00:09:34.866 "uuid": "dc06de8d-4cc2-4cd8-9165-7eb92a0004c8", 00:09:34.866 "strip_size_kb": 64, 00:09:34.866 "state": "offline", 00:09:34.866 "raid_level": "raid0", 00:09:34.866 "superblock": false, 00:09:34.866 "num_base_bdevs": 4, 00:09:34.866 "num_base_bdevs_discovered": 3, 00:09:34.866 "num_base_bdevs_operational": 3, 00:09:34.866 "base_bdevs_list": [ 00:09:34.866 { 00:09:34.866 "name": null, 00:09:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.866 "is_configured": false, 00:09:34.866 "data_offset": 0, 00:09:34.866 "data_size": 65536 00:09:34.866 }, 00:09:34.866 { 00:09:34.866 "name": "BaseBdev2", 00:09:34.866 "uuid": "b439a8ed-b2d4-4c49-a375-ca9c32d8a722", 00:09:34.866 "is_configured": 
true, 00:09:34.866 "data_offset": 0, 00:09:34.866 "data_size": 65536 00:09:34.866 }, 00:09:34.866 { 00:09:34.866 "name": "BaseBdev3", 00:09:34.866 "uuid": "c77bfbc8-a7d7-40b3-8e26-726f784253c8", 00:09:34.866 "is_configured": true, 00:09:34.866 "data_offset": 0, 00:09:34.866 "data_size": 65536 00:09:34.866 }, 00:09:34.866 { 00:09:34.866 "name": "BaseBdev4", 00:09:34.866 "uuid": "eaab9847-7d8b-4930-ac95-ae1b5c755e16", 00:09:34.866 "is_configured": true, 00:09:34.866 "data_offset": 0, 00:09:34.866 "data_size": 65536 00:09:34.866 } 00:09:34.866 ] 00:09:34.866 }' 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.866 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.126 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.127 [2024-11-18 23:04:54.432719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.127 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 [2024-11-18 23:04:54.503838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.388 23:04:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 [2024-11-18 23:04:54.574973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:35.388 [2024-11-18 23:04:54.575064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 BaseBdev2 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.388 [ 00:09:35.388 { 00:09:35.388 "name": "BaseBdev2", 00:09:35.388 "aliases": [ 00:09:35.388 "cdcccdb0-d991-48db-a56a-a2c680f18e40" 00:09:35.388 ], 00:09:35.388 "product_name": "Malloc disk", 00:09:35.388 "block_size": 512, 00:09:35.388 "num_blocks": 65536, 00:09:35.388 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:35.388 "assigned_rate_limits": { 00:09:35.388 "rw_ios_per_sec": 0, 00:09:35.388 "rw_mbytes_per_sec": 0, 00:09:35.388 "r_mbytes_per_sec": 0, 00:09:35.388 "w_mbytes_per_sec": 0 00:09:35.388 }, 00:09:35.388 "claimed": false, 00:09:35.388 "zoned": false, 00:09:35.388 "supported_io_types": { 00:09:35.388 "read": true, 00:09:35.388 "write": true, 00:09:35.388 "unmap": true, 00:09:35.388 "flush": true, 00:09:35.388 "reset": true, 00:09:35.388 "nvme_admin": false, 00:09:35.388 "nvme_io": false, 00:09:35.388 "nvme_io_md": false, 00:09:35.388 "write_zeroes": true, 00:09:35.388 "zcopy": true, 00:09:35.388 "get_zone_info": false, 00:09:35.388 "zone_management": false, 00:09:35.388 "zone_append": false, 00:09:35.388 "compare": false, 00:09:35.388 "compare_and_write": false, 00:09:35.388 "abort": true, 00:09:35.388 "seek_hole": false, 00:09:35.388 
"seek_data": false, 00:09:35.388 "copy": true, 00:09:35.388 "nvme_iov_md": false 00:09:35.388 }, 00:09:35.388 "memory_domains": [ 00:09:35.388 { 00:09:35.388 "dma_device_id": "system", 00:09:35.388 "dma_device_type": 1 00:09:35.388 }, 00:09:35.388 { 00:09:35.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.388 "dma_device_type": 2 00:09:35.388 } 00:09:35.388 ], 00:09:35.388 "driver_specific": {} 00:09:35.388 } 00:09:35.388 ] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.388 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.389 BaseBdev3 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.389 [ 00:09:35.389 { 00:09:35.389 "name": "BaseBdev3", 00:09:35.389 "aliases": [ 00:09:35.389 "b009abd6-b571-441d-976e-bdd80fead776" 00:09:35.389 ], 00:09:35.389 "product_name": "Malloc disk", 00:09:35.389 "block_size": 512, 00:09:35.389 "num_blocks": 65536, 00:09:35.389 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:35.389 "assigned_rate_limits": { 00:09:35.389 "rw_ios_per_sec": 0, 00:09:35.389 "rw_mbytes_per_sec": 0, 00:09:35.389 "r_mbytes_per_sec": 0, 00:09:35.389 "w_mbytes_per_sec": 0 00:09:35.389 }, 00:09:35.389 "claimed": false, 00:09:35.389 "zoned": false, 00:09:35.389 "supported_io_types": { 00:09:35.389 "read": true, 00:09:35.389 "write": true, 00:09:35.389 "unmap": true, 00:09:35.389 "flush": true, 00:09:35.389 "reset": true, 00:09:35.389 "nvme_admin": false, 00:09:35.389 "nvme_io": false, 00:09:35.389 "nvme_io_md": false, 00:09:35.389 "write_zeroes": true, 00:09:35.389 "zcopy": true, 00:09:35.389 "get_zone_info": false, 00:09:35.389 "zone_management": false, 00:09:35.389 "zone_append": false, 00:09:35.389 "compare": false, 00:09:35.389 "compare_and_write": false, 00:09:35.389 "abort": true, 00:09:35.389 "seek_hole": false, 00:09:35.389 "seek_data": false, 
00:09:35.389 "copy": true, 00:09:35.389 "nvme_iov_md": false 00:09:35.389 }, 00:09:35.389 "memory_domains": [ 00:09:35.389 { 00:09:35.389 "dma_device_id": "system", 00:09:35.389 "dma_device_type": 1 00:09:35.389 }, 00:09:35.389 { 00:09:35.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.389 "dma_device_type": 2 00:09:35.389 } 00:09:35.389 ], 00:09:35.389 "driver_specific": {} 00:09:35.389 } 00:09:35.389 ] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.389 BaseBdev4 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.389 
23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.389 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.652 [ 00:09:35.652 { 00:09:35.652 "name": "BaseBdev4", 00:09:35.652 "aliases": [ 00:09:35.652 "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8" 00:09:35.652 ], 00:09:35.652 "product_name": "Malloc disk", 00:09:35.652 "block_size": 512, 00:09:35.652 "num_blocks": 65536, 00:09:35.652 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:35.652 "assigned_rate_limits": { 00:09:35.652 "rw_ios_per_sec": 0, 00:09:35.652 "rw_mbytes_per_sec": 0, 00:09:35.652 "r_mbytes_per_sec": 0, 00:09:35.652 "w_mbytes_per_sec": 0 00:09:35.652 }, 00:09:35.652 "claimed": false, 00:09:35.652 "zoned": false, 00:09:35.652 "supported_io_types": { 00:09:35.652 "read": true, 00:09:35.652 "write": true, 00:09:35.652 "unmap": true, 00:09:35.652 "flush": true, 00:09:35.652 "reset": true, 00:09:35.652 "nvme_admin": false, 00:09:35.652 "nvme_io": false, 00:09:35.652 "nvme_io_md": false, 00:09:35.652 "write_zeroes": true, 00:09:35.652 "zcopy": true, 00:09:35.652 "get_zone_info": false, 00:09:35.652 "zone_management": false, 00:09:35.652 "zone_append": false, 00:09:35.652 "compare": false, 00:09:35.652 "compare_and_write": false, 00:09:35.652 "abort": true, 00:09:35.652 "seek_hole": false, 00:09:35.652 "seek_data": false, 00:09:35.652 
"copy": true, 00:09:35.652 "nvme_iov_md": false 00:09:35.652 }, 00:09:35.652 "memory_domains": [ 00:09:35.652 { 00:09:35.652 "dma_device_id": "system", 00:09:35.652 "dma_device_type": 1 00:09:35.652 }, 00:09:35.652 { 00:09:35.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.652 "dma_device_type": 2 00:09:35.652 } 00:09:35.652 ], 00:09:35.652 "driver_specific": {} 00:09:35.652 } 00:09:35.652 ] 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.652 [2024-11-18 23:04:54.806033] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.652 [2024-11-18 23:04:54.806132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.652 [2024-11-18 23:04:54.806173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.652 [2024-11-18 23:04:54.807955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.652 [2024-11-18 23:04:54.808046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.652 23:04:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.652 "name": "Existed_Raid", 00:09:35.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.652 "strip_size_kb": 64, 00:09:35.652 "state": "configuring", 00:09:35.652 
"raid_level": "raid0", 00:09:35.652 "superblock": false, 00:09:35.652 "num_base_bdevs": 4, 00:09:35.652 "num_base_bdevs_discovered": 3, 00:09:35.652 "num_base_bdevs_operational": 4, 00:09:35.652 "base_bdevs_list": [ 00:09:35.652 { 00:09:35.652 "name": "BaseBdev1", 00:09:35.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.652 "is_configured": false, 00:09:35.652 "data_offset": 0, 00:09:35.652 "data_size": 0 00:09:35.652 }, 00:09:35.652 { 00:09:35.652 "name": "BaseBdev2", 00:09:35.652 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:35.652 "is_configured": true, 00:09:35.652 "data_offset": 0, 00:09:35.652 "data_size": 65536 00:09:35.652 }, 00:09:35.652 { 00:09:35.652 "name": "BaseBdev3", 00:09:35.652 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:35.652 "is_configured": true, 00:09:35.652 "data_offset": 0, 00:09:35.652 "data_size": 65536 00:09:35.652 }, 00:09:35.652 { 00:09:35.652 "name": "BaseBdev4", 00:09:35.652 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:35.652 "is_configured": true, 00:09:35.652 "data_offset": 0, 00:09:35.652 "data_size": 65536 00:09:35.652 } 00:09:35.652 ] 00:09:35.652 }' 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.652 23:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.912 [2024-11-18 23:04:55.237308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.912 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.913 "name": "Existed_Raid", 00:09:35.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.913 "strip_size_kb": 64, 00:09:35.913 "state": "configuring", 00:09:35.913 "raid_level": "raid0", 00:09:35.913 "superblock": false, 00:09:35.913 
"num_base_bdevs": 4, 00:09:35.913 "num_base_bdevs_discovered": 2, 00:09:35.913 "num_base_bdevs_operational": 4, 00:09:35.913 "base_bdevs_list": [ 00:09:35.913 { 00:09:35.913 "name": "BaseBdev1", 00:09:35.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.913 "is_configured": false, 00:09:35.913 "data_offset": 0, 00:09:35.913 "data_size": 0 00:09:35.913 }, 00:09:35.913 { 00:09:35.913 "name": null, 00:09:35.913 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:35.913 "is_configured": false, 00:09:35.913 "data_offset": 0, 00:09:35.913 "data_size": 65536 00:09:35.913 }, 00:09:35.913 { 00:09:35.913 "name": "BaseBdev3", 00:09:35.913 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:35.913 "is_configured": true, 00:09:35.913 "data_offset": 0, 00:09:35.913 "data_size": 65536 00:09:35.913 }, 00:09:35.913 { 00:09:35.913 "name": "BaseBdev4", 00:09:35.913 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:35.913 "is_configured": true, 00:09:35.913 "data_offset": 0, 00:09:35.913 "data_size": 65536 00:09:35.913 } 00:09:35.913 ] 00:09:35.913 }' 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.913 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.507 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.508 23:04:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.508 BaseBdev1 00:09:36.508 [2024-11-18 23:04:55.723389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.508 [ 00:09:36.508 { 00:09:36.508 "name": "BaseBdev1", 00:09:36.508 "aliases": [ 00:09:36.508 "dbee3df4-575c-4784-a668-a5ce3259dc46" 00:09:36.508 ], 00:09:36.508 "product_name": "Malloc disk", 00:09:36.508 "block_size": 512, 00:09:36.508 "num_blocks": 65536, 00:09:36.508 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:36.508 "assigned_rate_limits": { 00:09:36.508 "rw_ios_per_sec": 0, 00:09:36.508 "rw_mbytes_per_sec": 0, 00:09:36.508 "r_mbytes_per_sec": 0, 00:09:36.508 "w_mbytes_per_sec": 0 00:09:36.508 }, 00:09:36.508 "claimed": true, 00:09:36.508 "claim_type": "exclusive_write", 00:09:36.508 "zoned": false, 00:09:36.508 "supported_io_types": { 00:09:36.508 "read": true, 00:09:36.508 "write": true, 00:09:36.508 "unmap": true, 00:09:36.508 "flush": true, 00:09:36.508 "reset": true, 00:09:36.508 "nvme_admin": false, 00:09:36.508 "nvme_io": false, 00:09:36.508 "nvme_io_md": false, 00:09:36.508 "write_zeroes": true, 00:09:36.508 "zcopy": true, 00:09:36.508 "get_zone_info": false, 00:09:36.508 "zone_management": false, 00:09:36.508 "zone_append": false, 00:09:36.508 "compare": false, 00:09:36.508 "compare_and_write": false, 00:09:36.508 "abort": true, 00:09:36.508 "seek_hole": false, 00:09:36.508 "seek_data": false, 00:09:36.508 "copy": true, 00:09:36.508 "nvme_iov_md": false 00:09:36.508 }, 00:09:36.508 "memory_domains": [ 00:09:36.508 { 00:09:36.508 "dma_device_id": "system", 00:09:36.508 "dma_device_type": 1 00:09:36.508 }, 00:09:36.508 { 00:09:36.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.508 "dma_device_type": 2 00:09:36.508 } 00:09:36.508 ], 00:09:36.508 "driver_specific": {} 00:09:36.508 } 00:09:36.508 ] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.508 "name": "Existed_Raid", 00:09:36.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.508 "strip_size_kb": 64, 00:09:36.508 "state": "configuring", 00:09:36.508 "raid_level": "raid0", 00:09:36.508 "superblock": false, 
00:09:36.508 "num_base_bdevs": 4, 00:09:36.508 "num_base_bdevs_discovered": 3, 00:09:36.508 "num_base_bdevs_operational": 4, 00:09:36.508 "base_bdevs_list": [ 00:09:36.508 { 00:09:36.508 "name": "BaseBdev1", 00:09:36.508 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:36.508 "is_configured": true, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 65536 00:09:36.508 }, 00:09:36.508 { 00:09:36.508 "name": null, 00:09:36.508 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:36.508 "is_configured": false, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 65536 00:09:36.508 }, 00:09:36.508 { 00:09:36.508 "name": "BaseBdev3", 00:09:36.508 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:36.508 "is_configured": true, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 65536 00:09:36.508 }, 00:09:36.508 { 00:09:36.508 "name": "BaseBdev4", 00:09:36.508 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:36.508 "is_configured": true, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 65536 00:09:36.508 } 00:09:36.508 ] 00:09:36.508 }' 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.508 23:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.076 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.076 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.076 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.076 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.076 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.076 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.077 23:04:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.077 [2024-11-18 23:04:56.198652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.077 23:04:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.077 "name": "Existed_Raid", 00:09:37.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.077 "strip_size_kb": 64, 00:09:37.077 "state": "configuring", 00:09:37.077 "raid_level": "raid0", 00:09:37.077 "superblock": false, 00:09:37.077 "num_base_bdevs": 4, 00:09:37.077 "num_base_bdevs_discovered": 2, 00:09:37.077 "num_base_bdevs_operational": 4, 00:09:37.077 "base_bdevs_list": [ 00:09:37.077 { 00:09:37.077 "name": "BaseBdev1", 00:09:37.077 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:37.077 "is_configured": true, 00:09:37.077 "data_offset": 0, 00:09:37.077 "data_size": 65536 00:09:37.077 }, 00:09:37.077 { 00:09:37.077 "name": null, 00:09:37.077 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:37.077 "is_configured": false, 00:09:37.077 "data_offset": 0, 00:09:37.077 "data_size": 65536 00:09:37.077 }, 00:09:37.077 { 00:09:37.077 "name": null, 00:09:37.077 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:37.077 "is_configured": false, 00:09:37.077 "data_offset": 0, 00:09:37.077 "data_size": 65536 00:09:37.077 }, 00:09:37.077 { 00:09:37.077 "name": "BaseBdev4", 00:09:37.077 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:37.077 "is_configured": true, 00:09:37.077 "data_offset": 0, 00:09:37.077 "data_size": 65536 00:09:37.077 } 00:09:37.077 ] 00:09:37.077 }' 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.077 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 [2024-11-18 23:04:56.705853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.336 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.597 "name": "Existed_Raid", 00:09:37.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.597 "strip_size_kb": 64, 00:09:37.597 "state": "configuring", 00:09:37.597 "raid_level": "raid0", 00:09:37.597 "superblock": false, 00:09:37.597 "num_base_bdevs": 4, 00:09:37.597 "num_base_bdevs_discovered": 3, 00:09:37.597 "num_base_bdevs_operational": 4, 00:09:37.597 "base_bdevs_list": [ 00:09:37.597 { 00:09:37.597 "name": "BaseBdev1", 00:09:37.597 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:37.597 "is_configured": true, 00:09:37.597 "data_offset": 0, 00:09:37.597 "data_size": 65536 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "name": null, 00:09:37.597 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:37.597 "is_configured": false, 00:09:37.597 "data_offset": 0, 00:09:37.597 "data_size": 65536 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "name": "BaseBdev3", 00:09:37.597 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 
00:09:37.597 "is_configured": true, 00:09:37.597 "data_offset": 0, 00:09:37.597 "data_size": 65536 00:09:37.597 }, 00:09:37.597 { 00:09:37.597 "name": "BaseBdev4", 00:09:37.597 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:37.597 "is_configured": true, 00:09:37.597 "data_offset": 0, 00:09:37.597 "data_size": 65536 00:09:37.597 } 00:09:37.597 ] 00:09:37.597 }' 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.597 23:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.857 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.857 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.857 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.857 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.858 [2024-11-18 23:04:57.169073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.858 23:04:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.858 "name": "Existed_Raid", 00:09:37.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.858 "strip_size_kb": 64, 00:09:37.858 "state": "configuring", 00:09:37.858 "raid_level": "raid0", 00:09:37.858 "superblock": false, 00:09:37.858 "num_base_bdevs": 4, 00:09:37.858 "num_base_bdevs_discovered": 2, 00:09:37.858 
"num_base_bdevs_operational": 4, 00:09:37.858 "base_bdevs_list": [ 00:09:37.858 { 00:09:37.858 "name": null, 00:09:37.858 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:37.858 "is_configured": false, 00:09:37.858 "data_offset": 0, 00:09:37.858 "data_size": 65536 00:09:37.858 }, 00:09:37.858 { 00:09:37.858 "name": null, 00:09:37.858 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:37.858 "is_configured": false, 00:09:37.858 "data_offset": 0, 00:09:37.858 "data_size": 65536 00:09:37.858 }, 00:09:37.858 { 00:09:37.858 "name": "BaseBdev3", 00:09:37.858 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:37.858 "is_configured": true, 00:09:37.858 "data_offset": 0, 00:09:37.858 "data_size": 65536 00:09:37.858 }, 00:09:37.858 { 00:09:37.858 "name": "BaseBdev4", 00:09:37.858 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:37.858 "is_configured": true, 00:09:37.858 "data_offset": 0, 00:09:37.858 "data_size": 65536 00:09:37.858 } 00:09:37.858 ] 00:09:37.858 }' 00:09:37.858 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.117 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 [2024-11-18 23:04:57.634877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.377 
23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.377 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.377 "name": "Existed_Raid", 00:09:38.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.377 "strip_size_kb": 64, 00:09:38.377 "state": "configuring", 00:09:38.377 "raid_level": "raid0", 00:09:38.377 "superblock": false, 00:09:38.377 "num_base_bdevs": 4, 00:09:38.377 "num_base_bdevs_discovered": 3, 00:09:38.377 "num_base_bdevs_operational": 4, 00:09:38.377 "base_bdevs_list": [ 00:09:38.377 { 00:09:38.377 "name": null, 00:09:38.377 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:38.377 "is_configured": false, 00:09:38.377 "data_offset": 0, 00:09:38.377 "data_size": 65536 00:09:38.377 }, 00:09:38.377 { 00:09:38.377 "name": "BaseBdev2", 00:09:38.377 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:38.377 "is_configured": true, 00:09:38.377 "data_offset": 0, 00:09:38.377 "data_size": 65536 00:09:38.377 }, 00:09:38.377 { 00:09:38.377 "name": "BaseBdev3", 00:09:38.377 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:38.377 "is_configured": true, 00:09:38.377 "data_offset": 0, 00:09:38.377 "data_size": 65536 00:09:38.377 }, 00:09:38.377 { 00:09:38.378 "name": "BaseBdev4", 00:09:38.378 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:38.378 "is_configured": true, 00:09:38.378 "data_offset": 0, 00:09:38.378 "data_size": 65536 00:09:38.378 } 00:09:38.378 ] 00:09:38.378 }' 00:09:38.378 23:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.378 23:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.948 23:04:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dbee3df4-575c-4784-a668-a5ce3259dc46 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.948 [2024-11-18 23:04:58.220798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:38.948 [2024-11-18 23:04:58.220898] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:38.948 [2024-11-18 23:04:58.220912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:38.948 [2024-11-18 23:04:58.221187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:38.948 
[2024-11-18 23:04:58.221331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:38.948 [2024-11-18 23:04:58.221346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:38.948 [2024-11-18 23:04:58.221518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.948 NewBaseBdev 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.948 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:38.948 [ 00:09:38.948 { 00:09:38.948 "name": "NewBaseBdev", 00:09:38.948 "aliases": [ 00:09:38.948 "dbee3df4-575c-4784-a668-a5ce3259dc46" 00:09:38.948 ], 00:09:38.948 "product_name": "Malloc disk", 00:09:38.948 "block_size": 512, 00:09:38.948 "num_blocks": 65536, 00:09:38.948 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:38.948 "assigned_rate_limits": { 00:09:38.948 "rw_ios_per_sec": 0, 00:09:38.948 "rw_mbytes_per_sec": 0, 00:09:38.948 "r_mbytes_per_sec": 0, 00:09:38.948 "w_mbytes_per_sec": 0 00:09:38.948 }, 00:09:38.948 "claimed": true, 00:09:38.948 "claim_type": "exclusive_write", 00:09:38.948 "zoned": false, 00:09:38.948 "supported_io_types": { 00:09:38.948 "read": true, 00:09:38.948 "write": true, 00:09:38.948 "unmap": true, 00:09:38.948 "flush": true, 00:09:38.948 "reset": true, 00:09:38.948 "nvme_admin": false, 00:09:38.948 "nvme_io": false, 00:09:38.948 "nvme_io_md": false, 00:09:38.948 "write_zeroes": true, 00:09:38.948 "zcopy": true, 00:09:38.948 "get_zone_info": false, 00:09:38.948 "zone_management": false, 00:09:38.948 "zone_append": false, 00:09:38.948 "compare": false, 00:09:38.948 "compare_and_write": false, 00:09:38.948 "abort": true, 00:09:38.948 "seek_hole": false, 00:09:38.948 "seek_data": false, 00:09:38.949 "copy": true, 00:09:38.949 "nvme_iov_md": false 00:09:38.949 }, 00:09:38.949 "memory_domains": [ 00:09:38.949 { 00:09:38.949 "dma_device_id": "system", 00:09:38.949 "dma_device_type": 1 00:09:38.949 }, 00:09:38.949 { 00:09:38.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.949 "dma_device_type": 2 00:09:38.949 } 00:09:38.949 ], 00:09:38.949 "driver_specific": {} 00:09:38.949 } 00:09:38.949 ] 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.949 "name": "Existed_Raid", 00:09:38.949 "uuid": "0cad464b-0206-4677-b520-3503095d4049", 00:09:38.949 "strip_size_kb": 64, 00:09:38.949 "state": "online", 00:09:38.949 "raid_level": "raid0", 00:09:38.949 "superblock": false, 00:09:38.949 "num_base_bdevs": 4, 00:09:38.949 
"num_base_bdevs_discovered": 4, 00:09:38.949 "num_base_bdevs_operational": 4, 00:09:38.949 "base_bdevs_list": [ 00:09:38.949 { 00:09:38.949 "name": "NewBaseBdev", 00:09:38.949 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:38.949 "is_configured": true, 00:09:38.949 "data_offset": 0, 00:09:38.949 "data_size": 65536 00:09:38.949 }, 00:09:38.949 { 00:09:38.949 "name": "BaseBdev2", 00:09:38.949 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:38.949 "is_configured": true, 00:09:38.949 "data_offset": 0, 00:09:38.949 "data_size": 65536 00:09:38.949 }, 00:09:38.949 { 00:09:38.949 "name": "BaseBdev3", 00:09:38.949 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:38.949 "is_configured": true, 00:09:38.949 "data_offset": 0, 00:09:38.949 "data_size": 65536 00:09:38.949 }, 00:09:38.949 { 00:09:38.949 "name": "BaseBdev4", 00:09:38.949 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:38.949 "is_configured": true, 00:09:38.949 "data_offset": 0, 00:09:38.949 "data_size": 65536 00:09:38.949 } 00:09:38.949 ] 00:09:38.949 }' 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.949 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.519 [2024-11-18 23:04:58.732252] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.519 "name": "Existed_Raid", 00:09:39.519 "aliases": [ 00:09:39.519 "0cad464b-0206-4677-b520-3503095d4049" 00:09:39.519 ], 00:09:39.519 "product_name": "Raid Volume", 00:09:39.519 "block_size": 512, 00:09:39.519 "num_blocks": 262144, 00:09:39.519 "uuid": "0cad464b-0206-4677-b520-3503095d4049", 00:09:39.519 "assigned_rate_limits": { 00:09:39.519 "rw_ios_per_sec": 0, 00:09:39.519 "rw_mbytes_per_sec": 0, 00:09:39.519 "r_mbytes_per_sec": 0, 00:09:39.519 "w_mbytes_per_sec": 0 00:09:39.519 }, 00:09:39.519 "claimed": false, 00:09:39.519 "zoned": false, 00:09:39.519 "supported_io_types": { 00:09:39.519 "read": true, 00:09:39.519 "write": true, 00:09:39.519 "unmap": true, 00:09:39.519 "flush": true, 00:09:39.519 "reset": true, 00:09:39.519 "nvme_admin": false, 00:09:39.519 "nvme_io": false, 00:09:39.519 "nvme_io_md": false, 00:09:39.519 "write_zeroes": true, 00:09:39.519 "zcopy": false, 00:09:39.519 "get_zone_info": false, 00:09:39.519 "zone_management": false, 00:09:39.519 "zone_append": false, 00:09:39.519 "compare": false, 00:09:39.519 "compare_and_write": false, 00:09:39.519 "abort": false, 00:09:39.519 "seek_hole": false, 00:09:39.519 "seek_data": false, 00:09:39.519 "copy": false, 00:09:39.519 "nvme_iov_md": false 00:09:39.519 }, 00:09:39.519 "memory_domains": [ 
00:09:39.519 { 00:09:39.519 "dma_device_id": "system", 00:09:39.519 "dma_device_type": 1 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.519 "dma_device_type": 2 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "system", 00:09:39.519 "dma_device_type": 1 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.519 "dma_device_type": 2 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "system", 00:09:39.519 "dma_device_type": 1 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.519 "dma_device_type": 2 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "system", 00:09:39.519 "dma_device_type": 1 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.519 "dma_device_type": 2 00:09:39.519 } 00:09:39.519 ], 00:09:39.519 "driver_specific": { 00:09:39.519 "raid": { 00:09:39.519 "uuid": "0cad464b-0206-4677-b520-3503095d4049", 00:09:39.519 "strip_size_kb": 64, 00:09:39.519 "state": "online", 00:09:39.519 "raid_level": "raid0", 00:09:39.519 "superblock": false, 00:09:39.519 "num_base_bdevs": 4, 00:09:39.519 "num_base_bdevs_discovered": 4, 00:09:39.519 "num_base_bdevs_operational": 4, 00:09:39.519 "base_bdevs_list": [ 00:09:39.519 { 00:09:39.519 "name": "NewBaseBdev", 00:09:39.519 "uuid": "dbee3df4-575c-4784-a668-a5ce3259dc46", 00:09:39.519 "is_configured": true, 00:09:39.519 "data_offset": 0, 00:09:39.519 "data_size": 65536 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "name": "BaseBdev2", 00:09:39.519 "uuid": "cdcccdb0-d991-48db-a56a-a2c680f18e40", 00:09:39.519 "is_configured": true, 00:09:39.519 "data_offset": 0, 00:09:39.519 "data_size": 65536 00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "name": "BaseBdev3", 00:09:39.519 "uuid": "b009abd6-b571-441d-976e-bdd80fead776", 00:09:39.519 "is_configured": true, 00:09:39.519 "data_offset": 0, 00:09:39.519 "data_size": 65536 
00:09:39.519 }, 00:09:39.519 { 00:09:39.519 "name": "BaseBdev4", 00:09:39.519 "uuid": "3d979ce0-cfda-40a0-a73b-40b2f7a5bad8", 00:09:39.519 "is_configured": true, 00:09:39.519 "data_offset": 0, 00:09:39.519 "data_size": 65536 00:09:39.519 } 00:09:39.519 ] 00:09:39.519 } 00:09:39.519 } 00:09:39.519 }' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.519 BaseBdev2 00:09:39.519 BaseBdev3 00:09:39.519 BaseBdev4' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.519 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.779 
23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.779 23:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.779 [2024-11-18 23:04:59.063373] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.779 [2024-11-18 23:04:59.063442] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.779 [2024-11-18 23:04:59.063543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.779 [2024-11-18 23:04:59.063639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.779 [2024-11-18 23:04:59.063694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80318 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80318 ']' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80318 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80318 00:09:39.779 killing process with pid 80318 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80318' 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80318 00:09:39.779 [2024-11-18 23:04:59.111740] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.779 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80318 00:09:39.779 [2024-11-18 23:04:59.152192] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.039 ************************************ 00:09:40.039 END TEST raid_state_function_test 00:09:40.039 ************************************ 00:09:40.039 23:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.039 00:09:40.039 real 0m9.482s 00:09:40.039 user 0m16.242s 00:09:40.039 sys 0m1.891s 00:09:40.039 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.039 23:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.299 23:04:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:40.299 23:04:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:40.299 23:04:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.299 23:04:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.299 ************************************ 00:09:40.299 START TEST raid_state_function_test_sb 00:09:40.299 ************************************ 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:40.299 
23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:40.299 Process raid pid: 80967 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80967 00:09:40.299 23:04:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80967' 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80967 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80967 ']' 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.299 23:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.299 [2024-11-18 23:04:59.560961] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:40.299 [2024-11-18 23:04:59.561173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.559 [2024-11-18 23:04:59.720178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.559 [2024-11-18 23:04:59.765107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.559 [2024-11-18 23:04:59.807226] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.559 [2024-11-18 23:04:59.807348] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.127 [2024-11-18 23:05:00.376744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.127 [2024-11-18 23:05:00.376864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.127 [2024-11-18 23:05:00.376895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.127 [2024-11-18 23:05:00.376919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.127 [2024-11-18 23:05:00.376937] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:41.127 [2024-11-18 23:05:00.376961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.127 [2024-11-18 23:05:00.376978] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:41.127 [2024-11-18 23:05:00.376989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.127 23:05:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.127 "name": "Existed_Raid", 00:09:41.127 "uuid": "027c356d-f04a-41b1-bd69-689ab3525183", 00:09:41.127 "strip_size_kb": 64, 00:09:41.127 "state": "configuring", 00:09:41.127 "raid_level": "raid0", 00:09:41.127 "superblock": true, 00:09:41.127 "num_base_bdevs": 4, 00:09:41.127 "num_base_bdevs_discovered": 0, 00:09:41.127 "num_base_bdevs_operational": 4, 00:09:41.127 "base_bdevs_list": [ 00:09:41.127 { 00:09:41.127 "name": "BaseBdev1", 00:09:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.127 "is_configured": false, 00:09:41.127 "data_offset": 0, 00:09:41.127 "data_size": 0 00:09:41.127 }, 00:09:41.127 { 00:09:41.127 "name": "BaseBdev2", 00:09:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.127 "is_configured": false, 00:09:41.127 "data_offset": 0, 00:09:41.127 "data_size": 0 00:09:41.127 }, 00:09:41.127 { 00:09:41.127 "name": "BaseBdev3", 00:09:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.127 "is_configured": false, 00:09:41.127 "data_offset": 0, 00:09:41.127 "data_size": 0 00:09:41.127 }, 00:09:41.127 { 00:09:41.127 "name": "BaseBdev4", 00:09:41.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.127 "is_configured": false, 00:09:41.127 "data_offset": 0, 00:09:41.127 "data_size": 0 00:09:41.127 } 00:09:41.127 ] 00:09:41.127 }' 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.127 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 [2024-11-18 23:05:00.827883] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.696 [2024-11-18 23:05:00.827967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 [2024-11-18 23:05:00.839907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.696 [2024-11-18 23:05:00.839983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.696 [2024-11-18 23:05:00.840011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.696 [2024-11-18 23:05:00.840033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.696 [2024-11-18 23:05:00.840052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.696 [2024-11-18 23:05:00.840073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.696 [2024-11-18 23:05:00.840091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:41.696 [2024-11-18 23:05:00.840112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 [2024-11-18 23:05:00.860663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.696 BaseBdev1 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 [ 00:09:41.696 { 00:09:41.696 "name": "BaseBdev1", 00:09:41.696 "aliases": [ 00:09:41.696 "afb3a297-3670-47cb-a354-5f4b8c0b10df" 00:09:41.696 ], 00:09:41.696 "product_name": "Malloc disk", 00:09:41.696 "block_size": 512, 00:09:41.696 "num_blocks": 65536, 00:09:41.696 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:41.696 "assigned_rate_limits": { 00:09:41.696 "rw_ios_per_sec": 0, 00:09:41.696 "rw_mbytes_per_sec": 0, 00:09:41.696 "r_mbytes_per_sec": 0, 00:09:41.696 "w_mbytes_per_sec": 0 00:09:41.696 }, 00:09:41.696 "claimed": true, 00:09:41.696 "claim_type": "exclusive_write", 00:09:41.696 "zoned": false, 00:09:41.696 "supported_io_types": { 00:09:41.696 "read": true, 00:09:41.696 "write": true, 00:09:41.696 "unmap": true, 00:09:41.696 "flush": true, 00:09:41.696 "reset": true, 00:09:41.696 "nvme_admin": false, 00:09:41.696 "nvme_io": false, 00:09:41.696 "nvme_io_md": false, 00:09:41.696 "write_zeroes": true, 00:09:41.696 "zcopy": true, 00:09:41.696 "get_zone_info": false, 00:09:41.696 "zone_management": false, 00:09:41.696 "zone_append": false, 00:09:41.696 "compare": false, 00:09:41.696 "compare_and_write": false, 00:09:41.696 "abort": true, 00:09:41.696 "seek_hole": false, 00:09:41.696 "seek_data": false, 00:09:41.696 "copy": true, 00:09:41.696 "nvme_iov_md": false 00:09:41.696 }, 00:09:41.696 "memory_domains": [ 00:09:41.696 { 00:09:41.696 "dma_device_id": "system", 00:09:41.696 "dma_device_type": 1 00:09:41.696 }, 00:09:41.696 { 00:09:41.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.696 "dma_device_type": 2 00:09:41.696 } 00:09:41.696 ], 00:09:41.696 "driver_specific": {} 
00:09:41.696 } 00:09:41.696 ] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.696 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.696 "name": "Existed_Raid", 00:09:41.696 "uuid": "52148561-1ad0-434f-a7ee-b65eac016669", 00:09:41.696 "strip_size_kb": 64, 00:09:41.696 "state": "configuring", 00:09:41.696 "raid_level": "raid0", 00:09:41.696 "superblock": true, 00:09:41.696 "num_base_bdevs": 4, 00:09:41.696 "num_base_bdevs_discovered": 1, 00:09:41.696 "num_base_bdevs_operational": 4, 00:09:41.696 "base_bdevs_list": [ 00:09:41.696 { 00:09:41.696 "name": "BaseBdev1", 00:09:41.696 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:41.696 "is_configured": true, 00:09:41.696 "data_offset": 2048, 00:09:41.696 "data_size": 63488 00:09:41.696 }, 00:09:41.696 { 00:09:41.696 "name": "BaseBdev2", 00:09:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.696 "is_configured": false, 00:09:41.697 "data_offset": 0, 00:09:41.697 "data_size": 0 00:09:41.697 }, 00:09:41.697 { 00:09:41.697 "name": "BaseBdev3", 00:09:41.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.697 "is_configured": false, 00:09:41.697 "data_offset": 0, 00:09:41.697 "data_size": 0 00:09:41.697 }, 00:09:41.697 { 00:09:41.697 "name": "BaseBdev4", 00:09:41.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.697 "is_configured": false, 00:09:41.697 "data_offset": 0, 00:09:41.697 "data_size": 0 00:09:41.697 } 00:09:41.697 ] 00:09:41.697 }' 00:09:41.697 23:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.697 23:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.955 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.955 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.955 23:05:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.214 [2024-11-18 23:05:01.335881] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.214 [2024-11-18 23:05:01.335967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:42.214 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.214 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:42.214 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.214 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.214 [2024-11-18 23:05:01.347897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.214 [2024-11-18 23:05:01.349716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.215 [2024-11-18 23:05:01.349758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.215 [2024-11-18 23:05:01.349768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.215 [2024-11-18 23:05:01.349776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.215 [2024-11-18 23:05:01.349782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:42.215 [2024-11-18 23:05:01.349790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:42.215 23:05:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.215 "name": 
"Existed_Raid", 00:09:42.215 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:42.215 "strip_size_kb": 64, 00:09:42.215 "state": "configuring", 00:09:42.215 "raid_level": "raid0", 00:09:42.215 "superblock": true, 00:09:42.215 "num_base_bdevs": 4, 00:09:42.215 "num_base_bdevs_discovered": 1, 00:09:42.215 "num_base_bdevs_operational": 4, 00:09:42.215 "base_bdevs_list": [ 00:09:42.215 { 00:09:42.215 "name": "BaseBdev1", 00:09:42.215 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:42.215 "is_configured": true, 00:09:42.215 "data_offset": 2048, 00:09:42.215 "data_size": 63488 00:09:42.215 }, 00:09:42.215 { 00:09:42.215 "name": "BaseBdev2", 00:09:42.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.215 "is_configured": false, 00:09:42.215 "data_offset": 0, 00:09:42.215 "data_size": 0 00:09:42.215 }, 00:09:42.215 { 00:09:42.215 "name": "BaseBdev3", 00:09:42.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.215 "is_configured": false, 00:09:42.215 "data_offset": 0, 00:09:42.215 "data_size": 0 00:09:42.215 }, 00:09:42.215 { 00:09:42.215 "name": "BaseBdev4", 00:09:42.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.215 "is_configured": false, 00:09:42.215 "data_offset": 0, 00:09:42.215 "data_size": 0 00:09:42.215 } 00:09:42.215 ] 00:09:42.215 }' 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.215 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.475 [2024-11-18 23:05:01.832366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:42.475 BaseBdev2 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.475 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.735 [ 00:09:42.735 { 00:09:42.735 "name": "BaseBdev2", 00:09:42.735 "aliases": [ 00:09:42.735 "953cc7a9-035f-4029-ae44-c91394054dc3" 00:09:42.735 ], 00:09:42.735 "product_name": "Malloc disk", 00:09:42.735 "block_size": 512, 00:09:42.735 "num_blocks": 65536, 00:09:42.735 "uuid": "953cc7a9-035f-4029-ae44-c91394054dc3", 00:09:42.735 
"assigned_rate_limits": { 00:09:42.735 "rw_ios_per_sec": 0, 00:09:42.735 "rw_mbytes_per_sec": 0, 00:09:42.735 "r_mbytes_per_sec": 0, 00:09:42.735 "w_mbytes_per_sec": 0 00:09:42.735 }, 00:09:42.735 "claimed": true, 00:09:42.735 "claim_type": "exclusive_write", 00:09:42.735 "zoned": false, 00:09:42.735 "supported_io_types": { 00:09:42.735 "read": true, 00:09:42.735 "write": true, 00:09:42.735 "unmap": true, 00:09:42.735 "flush": true, 00:09:42.735 "reset": true, 00:09:42.735 "nvme_admin": false, 00:09:42.735 "nvme_io": false, 00:09:42.735 "nvme_io_md": false, 00:09:42.735 "write_zeroes": true, 00:09:42.735 "zcopy": true, 00:09:42.735 "get_zone_info": false, 00:09:42.735 "zone_management": false, 00:09:42.735 "zone_append": false, 00:09:42.735 "compare": false, 00:09:42.735 "compare_and_write": false, 00:09:42.735 "abort": true, 00:09:42.735 "seek_hole": false, 00:09:42.735 "seek_data": false, 00:09:42.735 "copy": true, 00:09:42.735 "nvme_iov_md": false 00:09:42.735 }, 00:09:42.735 "memory_domains": [ 00:09:42.735 { 00:09:42.735 "dma_device_id": "system", 00:09:42.735 "dma_device_type": 1 00:09:42.735 }, 00:09:42.735 { 00:09:42.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.735 "dma_device_type": 2 00:09:42.735 } 00:09:42.735 ], 00:09:42.735 "driver_specific": {} 00:09:42.735 } 00:09:42.735 ] 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.735 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.736 "name": "Existed_Raid", 00:09:42.736 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:42.736 "strip_size_kb": 64, 00:09:42.736 "state": "configuring", 00:09:42.736 "raid_level": "raid0", 00:09:42.736 "superblock": true, 00:09:42.736 "num_base_bdevs": 4, 00:09:42.736 "num_base_bdevs_discovered": 2, 00:09:42.736 "num_base_bdevs_operational": 4, 
00:09:42.736 "base_bdevs_list": [ 00:09:42.736 { 00:09:42.736 "name": "BaseBdev1", 00:09:42.736 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:42.736 "is_configured": true, 00:09:42.736 "data_offset": 2048, 00:09:42.736 "data_size": 63488 00:09:42.736 }, 00:09:42.736 { 00:09:42.736 "name": "BaseBdev2", 00:09:42.736 "uuid": "953cc7a9-035f-4029-ae44-c91394054dc3", 00:09:42.736 "is_configured": true, 00:09:42.736 "data_offset": 2048, 00:09:42.736 "data_size": 63488 00:09:42.736 }, 00:09:42.736 { 00:09:42.736 "name": "BaseBdev3", 00:09:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.736 "is_configured": false, 00:09:42.736 "data_offset": 0, 00:09:42.736 "data_size": 0 00:09:42.736 }, 00:09:42.736 { 00:09:42.736 "name": "BaseBdev4", 00:09:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.736 "is_configured": false, 00:09:42.736 "data_offset": 0, 00:09:42.736 "data_size": 0 00:09:42.736 } 00:09:42.736 ] 00:09:42.736 }' 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.736 23:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.997 [2024-11-18 23:05:02.326335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.997 BaseBdev3 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.997 [ 00:09:42.997 { 00:09:42.997 "name": "BaseBdev3", 00:09:42.997 "aliases": [ 00:09:42.997 "47f39253-99cd-4814-8848-c975a0383e03" 00:09:42.997 ], 00:09:42.997 "product_name": "Malloc disk", 00:09:42.997 "block_size": 512, 00:09:42.997 "num_blocks": 65536, 00:09:42.997 "uuid": "47f39253-99cd-4814-8848-c975a0383e03", 00:09:42.997 "assigned_rate_limits": { 00:09:42.997 "rw_ios_per_sec": 0, 00:09:42.997 "rw_mbytes_per_sec": 0, 00:09:42.997 "r_mbytes_per_sec": 0, 00:09:42.997 "w_mbytes_per_sec": 0 00:09:42.997 }, 00:09:42.997 "claimed": true, 00:09:42.997 "claim_type": "exclusive_write", 00:09:42.997 "zoned": false, 00:09:42.997 "supported_io_types": { 00:09:42.997 "read": true, 00:09:42.997 
"write": true, 00:09:42.997 "unmap": true, 00:09:42.997 "flush": true, 00:09:42.997 "reset": true, 00:09:42.997 "nvme_admin": false, 00:09:42.997 "nvme_io": false, 00:09:42.997 "nvme_io_md": false, 00:09:42.997 "write_zeroes": true, 00:09:42.997 "zcopy": true, 00:09:42.997 "get_zone_info": false, 00:09:42.997 "zone_management": false, 00:09:42.997 "zone_append": false, 00:09:42.997 "compare": false, 00:09:42.997 "compare_and_write": false, 00:09:42.997 "abort": true, 00:09:42.997 "seek_hole": false, 00:09:42.997 "seek_data": false, 00:09:42.997 "copy": true, 00:09:42.997 "nvme_iov_md": false 00:09:42.997 }, 00:09:42.997 "memory_domains": [ 00:09:42.997 { 00:09:42.997 "dma_device_id": "system", 00:09:42.997 "dma_device_type": 1 00:09:42.997 }, 00:09:42.997 { 00:09:42.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.997 "dma_device_type": 2 00:09:42.997 } 00:09:42.997 ], 00:09:42.997 "driver_specific": {} 00:09:42.997 } 00:09:42.997 ] 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.997 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.257 "name": "Existed_Raid", 00:09:43.257 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:43.257 "strip_size_kb": 64, 00:09:43.257 "state": "configuring", 00:09:43.257 "raid_level": "raid0", 00:09:43.257 "superblock": true, 00:09:43.257 "num_base_bdevs": 4, 00:09:43.257 "num_base_bdevs_discovered": 3, 00:09:43.257 "num_base_bdevs_operational": 4, 00:09:43.257 "base_bdevs_list": [ 00:09:43.257 { 00:09:43.257 "name": "BaseBdev1", 00:09:43.257 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:43.257 "is_configured": true, 00:09:43.257 "data_offset": 2048, 00:09:43.257 "data_size": 63488 00:09:43.257 }, 00:09:43.257 { 00:09:43.257 "name": "BaseBdev2", 00:09:43.257 "uuid": 
"953cc7a9-035f-4029-ae44-c91394054dc3", 00:09:43.257 "is_configured": true, 00:09:43.257 "data_offset": 2048, 00:09:43.257 "data_size": 63488 00:09:43.257 }, 00:09:43.257 { 00:09:43.257 "name": "BaseBdev3", 00:09:43.257 "uuid": "47f39253-99cd-4814-8848-c975a0383e03", 00:09:43.257 "is_configured": true, 00:09:43.257 "data_offset": 2048, 00:09:43.257 "data_size": 63488 00:09:43.257 }, 00:09:43.257 { 00:09:43.257 "name": "BaseBdev4", 00:09:43.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.257 "is_configured": false, 00:09:43.257 "data_offset": 0, 00:09:43.257 "data_size": 0 00:09:43.257 } 00:09:43.257 ] 00:09:43.257 }' 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.257 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.518 [2024-11-18 23:05:02.832492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:43.518 BaseBdev4 00:09:43.518 [2024-11-18 23:05:02.832796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:43.518 [2024-11-18 23:05:02.832816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:43.518 [2024-11-18 23:05:02.833087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:43.518 [2024-11-18 23:05:02.833219] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:43.518 [2024-11-18 23:05:02.833237] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:43.518 [2024-11-18 23:05:02.833367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.518 [ 00:09:43.518 { 00:09:43.518 "name": "BaseBdev4", 00:09:43.518 "aliases": [ 00:09:43.518 "6c226f28-678a-4d31-bc5f-6bde266e73de" 00:09:43.518 ], 00:09:43.518 "product_name": "Malloc disk", 00:09:43.518 "block_size": 512, 00:09:43.518 
"num_blocks": 65536, 00:09:43.518 "uuid": "6c226f28-678a-4d31-bc5f-6bde266e73de", 00:09:43.518 "assigned_rate_limits": { 00:09:43.518 "rw_ios_per_sec": 0, 00:09:43.518 "rw_mbytes_per_sec": 0, 00:09:43.518 "r_mbytes_per_sec": 0, 00:09:43.518 "w_mbytes_per_sec": 0 00:09:43.518 }, 00:09:43.518 "claimed": true, 00:09:43.518 "claim_type": "exclusive_write", 00:09:43.518 "zoned": false, 00:09:43.518 "supported_io_types": { 00:09:43.518 "read": true, 00:09:43.518 "write": true, 00:09:43.518 "unmap": true, 00:09:43.518 "flush": true, 00:09:43.518 "reset": true, 00:09:43.518 "nvme_admin": false, 00:09:43.518 "nvme_io": false, 00:09:43.518 "nvme_io_md": false, 00:09:43.518 "write_zeroes": true, 00:09:43.518 "zcopy": true, 00:09:43.518 "get_zone_info": false, 00:09:43.518 "zone_management": false, 00:09:43.518 "zone_append": false, 00:09:43.518 "compare": false, 00:09:43.518 "compare_and_write": false, 00:09:43.518 "abort": true, 00:09:43.518 "seek_hole": false, 00:09:43.518 "seek_data": false, 00:09:43.518 "copy": true, 00:09:43.518 "nvme_iov_md": false 00:09:43.518 }, 00:09:43.518 "memory_domains": [ 00:09:43.518 { 00:09:43.518 "dma_device_id": "system", 00:09:43.518 "dma_device_type": 1 00:09:43.518 }, 00:09:43.518 { 00:09:43.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.518 "dma_device_type": 2 00:09:43.518 } 00:09:43.518 ], 00:09:43.518 "driver_specific": {} 00:09:43.518 } 00:09:43.518 ] 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.518 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.519 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.779 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.779 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.779 "name": "Existed_Raid", 00:09:43.779 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:43.779 "strip_size_kb": 64, 00:09:43.779 "state": "online", 00:09:43.779 "raid_level": "raid0", 00:09:43.779 "superblock": true, 00:09:43.780 "num_base_bdevs": 4, 
00:09:43.780 "num_base_bdevs_discovered": 4, 00:09:43.780 "num_base_bdevs_operational": 4, 00:09:43.780 "base_bdevs_list": [ 00:09:43.780 { 00:09:43.780 "name": "BaseBdev1", 00:09:43.780 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:43.780 "is_configured": true, 00:09:43.780 "data_offset": 2048, 00:09:43.780 "data_size": 63488 00:09:43.780 }, 00:09:43.780 { 00:09:43.780 "name": "BaseBdev2", 00:09:43.780 "uuid": "953cc7a9-035f-4029-ae44-c91394054dc3", 00:09:43.780 "is_configured": true, 00:09:43.780 "data_offset": 2048, 00:09:43.780 "data_size": 63488 00:09:43.780 }, 00:09:43.780 { 00:09:43.780 "name": "BaseBdev3", 00:09:43.780 "uuid": "47f39253-99cd-4814-8848-c975a0383e03", 00:09:43.780 "is_configured": true, 00:09:43.780 "data_offset": 2048, 00:09:43.780 "data_size": 63488 00:09:43.780 }, 00:09:43.780 { 00:09:43.780 "name": "BaseBdev4", 00:09:43.780 "uuid": "6c226f28-678a-4d31-bc5f-6bde266e73de", 00:09:43.780 "is_configured": true, 00:09:43.780 "data_offset": 2048, 00:09:43.780 "data_size": 63488 00:09:43.780 } 00:09:43.780 ] 00:09:43.780 }' 00:09:43.780 23:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.780 23:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.039 
23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.039 [2024-11-18 23:05:03.300064] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.039 "name": "Existed_Raid", 00:09:44.039 "aliases": [ 00:09:44.039 "70e5d197-f501-40d9-a7d6-16e6807afc31" 00:09:44.039 ], 00:09:44.039 "product_name": "Raid Volume", 00:09:44.039 "block_size": 512, 00:09:44.039 "num_blocks": 253952, 00:09:44.039 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:44.039 "assigned_rate_limits": { 00:09:44.039 "rw_ios_per_sec": 0, 00:09:44.039 "rw_mbytes_per_sec": 0, 00:09:44.039 "r_mbytes_per_sec": 0, 00:09:44.039 "w_mbytes_per_sec": 0 00:09:44.039 }, 00:09:44.039 "claimed": false, 00:09:44.039 "zoned": false, 00:09:44.039 "supported_io_types": { 00:09:44.039 "read": true, 00:09:44.039 "write": true, 00:09:44.039 "unmap": true, 00:09:44.039 "flush": true, 00:09:44.039 "reset": true, 00:09:44.039 "nvme_admin": false, 00:09:44.039 "nvme_io": false, 00:09:44.039 "nvme_io_md": false, 00:09:44.039 "write_zeroes": true, 00:09:44.039 "zcopy": false, 00:09:44.039 "get_zone_info": false, 00:09:44.039 "zone_management": false, 00:09:44.039 "zone_append": false, 00:09:44.039 "compare": false, 00:09:44.039 "compare_and_write": false, 00:09:44.039 "abort": false, 00:09:44.039 "seek_hole": false, 00:09:44.039 "seek_data": false, 00:09:44.039 "copy": false, 00:09:44.039 
"nvme_iov_md": false 00:09:44.039 }, 00:09:44.039 "memory_domains": [ 00:09:44.039 { 00:09:44.039 "dma_device_id": "system", 00:09:44.039 "dma_device_type": 1 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.039 "dma_device_type": 2 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "system", 00:09:44.039 "dma_device_type": 1 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.039 "dma_device_type": 2 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "system", 00:09:44.039 "dma_device_type": 1 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.039 "dma_device_type": 2 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "system", 00:09:44.039 "dma_device_type": 1 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.039 "dma_device_type": 2 00:09:44.039 } 00:09:44.039 ], 00:09:44.039 "driver_specific": { 00:09:44.039 "raid": { 00:09:44.039 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:44.039 "strip_size_kb": 64, 00:09:44.039 "state": "online", 00:09:44.039 "raid_level": "raid0", 00:09:44.039 "superblock": true, 00:09:44.039 "num_base_bdevs": 4, 00:09:44.039 "num_base_bdevs_discovered": 4, 00:09:44.039 "num_base_bdevs_operational": 4, 00:09:44.039 "base_bdevs_list": [ 00:09:44.039 { 00:09:44.039 "name": "BaseBdev1", 00:09:44.039 "uuid": "afb3a297-3670-47cb-a354-5f4b8c0b10df", 00:09:44.039 "is_configured": true, 00:09:44.039 "data_offset": 2048, 00:09:44.039 "data_size": 63488 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "name": "BaseBdev2", 00:09:44.039 "uuid": "953cc7a9-035f-4029-ae44-c91394054dc3", 00:09:44.039 "is_configured": true, 00:09:44.039 "data_offset": 2048, 00:09:44.039 "data_size": 63488 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "name": "BaseBdev3", 00:09:44.039 "uuid": "47f39253-99cd-4814-8848-c975a0383e03", 00:09:44.039 "is_configured": true, 
00:09:44.039 "data_offset": 2048, 00:09:44.039 "data_size": 63488 00:09:44.039 }, 00:09:44.039 { 00:09:44.039 "name": "BaseBdev4", 00:09:44.039 "uuid": "6c226f28-678a-4d31-bc5f-6bde266e73de", 00:09:44.039 "is_configured": true, 00:09:44.039 "data_offset": 2048, 00:09:44.039 "data_size": 63488 00:09:44.039 } 00:09:44.039 ] 00:09:44.039 } 00:09:44.039 } 00:09:44.039 }' 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:44.039 BaseBdev2 00:09:44.039 BaseBdev3 00:09:44.039 BaseBdev4' 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.039 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.040 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.300 23:05:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.300 [2024-11-18 23:05:03.595301] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.300 [2024-11-18 23:05:03.595331] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.300 [2024-11-18 23:05:03.595373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:44.300 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.301 "name": "Existed_Raid", 00:09:44.301 "uuid": "70e5d197-f501-40d9-a7d6-16e6807afc31", 00:09:44.301 "strip_size_kb": 64, 00:09:44.301 "state": "offline", 00:09:44.301 "raid_level": "raid0", 00:09:44.301 "superblock": true, 00:09:44.301 "num_base_bdevs": 4, 00:09:44.301 "num_base_bdevs_discovered": 3, 00:09:44.301 "num_base_bdevs_operational": 3, 00:09:44.301 "base_bdevs_list": [ 00:09:44.301 { 00:09:44.301 "name": null, 00:09:44.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.301 "is_configured": false, 00:09:44.301 "data_offset": 0, 00:09:44.301 "data_size": 63488 00:09:44.301 }, 00:09:44.301 { 00:09:44.301 "name": "BaseBdev2", 00:09:44.301 "uuid": "953cc7a9-035f-4029-ae44-c91394054dc3", 00:09:44.301 "is_configured": true, 00:09:44.301 "data_offset": 2048, 00:09:44.301 "data_size": 63488 00:09:44.301 }, 00:09:44.301 { 00:09:44.301 "name": "BaseBdev3", 00:09:44.301 "uuid": "47f39253-99cd-4814-8848-c975a0383e03", 00:09:44.301 "is_configured": true, 00:09:44.301 "data_offset": 2048, 00:09:44.301 "data_size": 63488 00:09:44.301 }, 00:09:44.301 { 00:09:44.301 "name": "BaseBdev4", 00:09:44.301 "uuid": "6c226f28-678a-4d31-bc5f-6bde266e73de", 00:09:44.301 "is_configured": true, 00:09:44.301 "data_offset": 2048, 00:09:44.301 "data_size": 63488 00:09:44.301 } 00:09:44.301 ] 00:09:44.301 }' 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.301 23:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.871 23:05:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.871 [2024-11-18 23:05:04.081758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.871 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.872 [2024-11-18 23:05:04.152904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:44.872 23:05:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.872 [2024-11-18 23:05:04.223982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:44.872 [2024-11-18 23:05:04.224071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.872 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.132 BaseBdev2 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.132 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 [ 00:09:45.133 { 00:09:45.133 "name": "BaseBdev2", 00:09:45.133 "aliases": [ 00:09:45.133 
"544ba129-96e8-42ca-8f67-5f2efd05ff3b" 00:09:45.133 ], 00:09:45.133 "product_name": "Malloc disk", 00:09:45.133 "block_size": 512, 00:09:45.133 "num_blocks": 65536, 00:09:45.133 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:45.133 "assigned_rate_limits": { 00:09:45.133 "rw_ios_per_sec": 0, 00:09:45.133 "rw_mbytes_per_sec": 0, 00:09:45.133 "r_mbytes_per_sec": 0, 00:09:45.133 "w_mbytes_per_sec": 0 00:09:45.133 }, 00:09:45.133 "claimed": false, 00:09:45.133 "zoned": false, 00:09:45.133 "supported_io_types": { 00:09:45.133 "read": true, 00:09:45.133 "write": true, 00:09:45.133 "unmap": true, 00:09:45.133 "flush": true, 00:09:45.133 "reset": true, 00:09:45.133 "nvme_admin": false, 00:09:45.133 "nvme_io": false, 00:09:45.133 "nvme_io_md": false, 00:09:45.133 "write_zeroes": true, 00:09:45.133 "zcopy": true, 00:09:45.133 "get_zone_info": false, 00:09:45.133 "zone_management": false, 00:09:45.133 "zone_append": false, 00:09:45.133 "compare": false, 00:09:45.133 "compare_and_write": false, 00:09:45.133 "abort": true, 00:09:45.133 "seek_hole": false, 00:09:45.133 "seek_data": false, 00:09:45.133 "copy": true, 00:09:45.133 "nvme_iov_md": false 00:09:45.133 }, 00:09:45.133 "memory_domains": [ 00:09:45.133 { 00:09:45.133 "dma_device_id": "system", 00:09:45.133 "dma_device_type": 1 00:09:45.133 }, 00:09:45.133 { 00:09:45.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.133 "dma_device_type": 2 00:09:45.133 } 00:09:45.133 ], 00:09:45.133 "driver_specific": {} 00:09:45.133 } 00:09:45.133 ] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.133 23:05:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 BaseBdev3 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 [ 00:09:45.133 { 
00:09:45.133 "name": "BaseBdev3", 00:09:45.133 "aliases": [ 00:09:45.133 "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d" 00:09:45.133 ], 00:09:45.133 "product_name": "Malloc disk", 00:09:45.133 "block_size": 512, 00:09:45.133 "num_blocks": 65536, 00:09:45.133 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:45.133 "assigned_rate_limits": { 00:09:45.133 "rw_ios_per_sec": 0, 00:09:45.133 "rw_mbytes_per_sec": 0, 00:09:45.133 "r_mbytes_per_sec": 0, 00:09:45.133 "w_mbytes_per_sec": 0 00:09:45.133 }, 00:09:45.133 "claimed": false, 00:09:45.133 "zoned": false, 00:09:45.133 "supported_io_types": { 00:09:45.133 "read": true, 00:09:45.133 "write": true, 00:09:45.133 "unmap": true, 00:09:45.133 "flush": true, 00:09:45.133 "reset": true, 00:09:45.133 "nvme_admin": false, 00:09:45.133 "nvme_io": false, 00:09:45.133 "nvme_io_md": false, 00:09:45.133 "write_zeroes": true, 00:09:45.133 "zcopy": true, 00:09:45.133 "get_zone_info": false, 00:09:45.133 "zone_management": false, 00:09:45.133 "zone_append": false, 00:09:45.133 "compare": false, 00:09:45.133 "compare_and_write": false, 00:09:45.133 "abort": true, 00:09:45.133 "seek_hole": false, 00:09:45.133 "seek_data": false, 00:09:45.133 "copy": true, 00:09:45.133 "nvme_iov_md": false 00:09:45.133 }, 00:09:45.133 "memory_domains": [ 00:09:45.133 { 00:09:45.133 "dma_device_id": "system", 00:09:45.133 "dma_device_type": 1 00:09:45.133 }, 00:09:45.133 { 00:09:45.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.133 "dma_device_type": 2 00:09:45.133 } 00:09:45.133 ], 00:09:45.133 "driver_specific": {} 00:09:45.133 } 00:09:45.133 ] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 BaseBdev4 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.133 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:45.133 [ 00:09:45.133 { 00:09:45.133 "name": "BaseBdev4", 00:09:45.133 "aliases": [ 00:09:45.133 "7cc20316-5bf9-4800-b500-dabb24843c03" 00:09:45.133 ], 00:09:45.133 "product_name": "Malloc disk", 00:09:45.133 "block_size": 512, 00:09:45.133 "num_blocks": 65536, 00:09:45.133 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:45.133 "assigned_rate_limits": { 00:09:45.133 "rw_ios_per_sec": 0, 00:09:45.133 "rw_mbytes_per_sec": 0, 00:09:45.133 "r_mbytes_per_sec": 0, 00:09:45.133 "w_mbytes_per_sec": 0 00:09:45.133 }, 00:09:45.133 "claimed": false, 00:09:45.133 "zoned": false, 00:09:45.133 "supported_io_types": { 00:09:45.133 "read": true, 00:09:45.133 "write": true, 00:09:45.133 "unmap": true, 00:09:45.133 "flush": true, 00:09:45.133 "reset": true, 00:09:45.133 "nvme_admin": false, 00:09:45.133 "nvme_io": false, 00:09:45.133 "nvme_io_md": false, 00:09:45.133 "write_zeroes": true, 00:09:45.134 "zcopy": true, 00:09:45.134 "get_zone_info": false, 00:09:45.134 "zone_management": false, 00:09:45.134 "zone_append": false, 00:09:45.134 "compare": false, 00:09:45.134 "compare_and_write": false, 00:09:45.134 "abort": true, 00:09:45.134 "seek_hole": false, 00:09:45.134 "seek_data": false, 00:09:45.134 "copy": true, 00:09:45.134 "nvme_iov_md": false 00:09:45.134 }, 00:09:45.134 "memory_domains": [ 00:09:45.134 { 00:09:45.134 "dma_device_id": "system", 00:09:45.134 "dma_device_type": 1 00:09:45.134 }, 00:09:45.134 { 00:09:45.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.134 "dma_device_type": 2 00:09:45.134 } 00:09:45.134 ], 00:09:45.134 "driver_specific": {} 00:09:45.134 } 00:09:45.134 ] 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.134 23:05:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.134 [2024-11-18 23:05:04.455527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.134 [2024-11-18 23:05:04.455610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.134 [2024-11-18 23:05:04.455650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.134 [2024-11-18 23:05:04.457503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.134 [2024-11-18 23:05:04.457606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.134 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.393 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.393 "name": "Existed_Raid", 00:09:45.393 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:45.393 "strip_size_kb": 64, 00:09:45.393 "state": "configuring", 00:09:45.393 "raid_level": "raid0", 00:09:45.393 "superblock": true, 00:09:45.393 "num_base_bdevs": 4, 00:09:45.393 "num_base_bdevs_discovered": 3, 00:09:45.393 "num_base_bdevs_operational": 4, 00:09:45.393 "base_bdevs_list": [ 00:09:45.393 { 00:09:45.393 "name": "BaseBdev1", 00:09:45.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.393 "is_configured": false, 00:09:45.393 "data_offset": 0, 00:09:45.393 "data_size": 0 00:09:45.393 }, 00:09:45.393 { 00:09:45.393 "name": "BaseBdev2", 00:09:45.393 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:45.393 "is_configured": true, 00:09:45.393 "data_offset": 2048, 00:09:45.393 "data_size": 63488 
00:09:45.393 }, 00:09:45.393 { 00:09:45.393 "name": "BaseBdev3", 00:09:45.393 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:45.393 "is_configured": true, 00:09:45.393 "data_offset": 2048, 00:09:45.393 "data_size": 63488 00:09:45.393 }, 00:09:45.393 { 00:09:45.393 "name": "BaseBdev4", 00:09:45.393 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:45.393 "is_configured": true, 00:09:45.393 "data_offset": 2048, 00:09:45.393 "data_size": 63488 00:09:45.393 } 00:09:45.393 ] 00:09:45.393 }' 00:09:45.393 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.393 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.657 [2024-11-18 23:05:04.938708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.657 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.658 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.658 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.658 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.658 "name": "Existed_Raid", 00:09:45.658 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:45.658 "strip_size_kb": 64, 00:09:45.658 "state": "configuring", 00:09:45.658 "raid_level": "raid0", 00:09:45.658 "superblock": true, 00:09:45.658 "num_base_bdevs": 4, 00:09:45.658 "num_base_bdevs_discovered": 2, 00:09:45.658 "num_base_bdevs_operational": 4, 00:09:45.658 "base_bdevs_list": [ 00:09:45.658 { 00:09:45.658 "name": "BaseBdev1", 00:09:45.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.658 "is_configured": false, 00:09:45.658 "data_offset": 0, 00:09:45.658 "data_size": 0 00:09:45.658 }, 00:09:45.658 { 00:09:45.658 "name": null, 00:09:45.658 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:45.658 "is_configured": false, 00:09:45.658 "data_offset": 0, 00:09:45.658 "data_size": 63488 
00:09:45.658 }, 00:09:45.658 { 00:09:45.658 "name": "BaseBdev3", 00:09:45.658 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:45.658 "is_configured": true, 00:09:45.658 "data_offset": 2048, 00:09:45.658 "data_size": 63488 00:09:45.658 }, 00:09:45.658 { 00:09:45.658 "name": "BaseBdev4", 00:09:45.658 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:45.658 "is_configured": true, 00:09:45.658 "data_offset": 2048, 00:09:45.658 "data_size": 63488 00:09:45.658 } 00:09:45.658 ] 00:09:45.658 }' 00:09:45.658 23:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.658 23:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.233 [2024-11-18 23:05:05.368910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.233 BaseBdev1 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.233 [ 00:09:46.233 { 00:09:46.233 "name": "BaseBdev1", 00:09:46.233 "aliases": [ 00:09:46.233 "a35c7e67-c3a8-4cb1-badc-b10c892aacc1" 00:09:46.233 ], 00:09:46.233 "product_name": "Malloc disk", 00:09:46.233 "block_size": 512, 00:09:46.233 "num_blocks": 65536, 00:09:46.233 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:46.233 "assigned_rate_limits": { 00:09:46.233 "rw_ios_per_sec": 0, 00:09:46.233 "rw_mbytes_per_sec": 0, 
00:09:46.233 "r_mbytes_per_sec": 0, 00:09:46.233 "w_mbytes_per_sec": 0 00:09:46.233 }, 00:09:46.233 "claimed": true, 00:09:46.233 "claim_type": "exclusive_write", 00:09:46.233 "zoned": false, 00:09:46.233 "supported_io_types": { 00:09:46.233 "read": true, 00:09:46.233 "write": true, 00:09:46.233 "unmap": true, 00:09:46.233 "flush": true, 00:09:46.233 "reset": true, 00:09:46.233 "nvme_admin": false, 00:09:46.233 "nvme_io": false, 00:09:46.233 "nvme_io_md": false, 00:09:46.233 "write_zeroes": true, 00:09:46.233 "zcopy": true, 00:09:46.233 "get_zone_info": false, 00:09:46.233 "zone_management": false, 00:09:46.233 "zone_append": false, 00:09:46.233 "compare": false, 00:09:46.233 "compare_and_write": false, 00:09:46.233 "abort": true, 00:09:46.233 "seek_hole": false, 00:09:46.233 "seek_data": false, 00:09:46.233 "copy": true, 00:09:46.233 "nvme_iov_md": false 00:09:46.233 }, 00:09:46.233 "memory_domains": [ 00:09:46.233 { 00:09:46.233 "dma_device_id": "system", 00:09:46.233 "dma_device_type": 1 00:09:46.233 }, 00:09:46.233 { 00:09:46.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.233 "dma_device_type": 2 00:09:46.233 } 00:09:46.233 ], 00:09:46.233 "driver_specific": {} 00:09:46.233 } 00:09:46.233 ] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.233 23:05:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.233 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.233 "name": "Existed_Raid", 00:09:46.233 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:46.233 "strip_size_kb": 64, 00:09:46.233 "state": "configuring", 00:09:46.233 "raid_level": "raid0", 00:09:46.233 "superblock": true, 00:09:46.233 "num_base_bdevs": 4, 00:09:46.233 "num_base_bdevs_discovered": 3, 00:09:46.233 "num_base_bdevs_operational": 4, 00:09:46.233 "base_bdevs_list": [ 00:09:46.233 { 00:09:46.233 "name": "BaseBdev1", 00:09:46.233 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:46.233 "is_configured": true, 00:09:46.233 "data_offset": 2048, 00:09:46.233 "data_size": 63488 00:09:46.233 }, 00:09:46.233 { 
00:09:46.233 "name": null, 00:09:46.233 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:46.233 "is_configured": false, 00:09:46.233 "data_offset": 0, 00:09:46.233 "data_size": 63488 00:09:46.233 }, 00:09:46.233 { 00:09:46.233 "name": "BaseBdev3", 00:09:46.233 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:46.233 "is_configured": true, 00:09:46.233 "data_offset": 2048, 00:09:46.233 "data_size": 63488 00:09:46.233 }, 00:09:46.233 { 00:09:46.233 "name": "BaseBdev4", 00:09:46.233 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:46.233 "is_configured": true, 00:09:46.234 "data_offset": 2048, 00:09:46.234 "data_size": 63488 00:09:46.234 } 00:09:46.234 ] 00:09:46.234 }' 00:09:46.234 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.234 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.493 [2024-11-18 23:05:05.836160] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.493 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.494 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.494 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.494 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.494 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.751 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.751 23:05:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.751 "name": "Existed_Raid", 00:09:46.751 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:46.751 "strip_size_kb": 64, 00:09:46.751 "state": "configuring", 00:09:46.751 "raid_level": "raid0", 00:09:46.751 "superblock": true, 00:09:46.751 "num_base_bdevs": 4, 00:09:46.751 "num_base_bdevs_discovered": 2, 00:09:46.751 "num_base_bdevs_operational": 4, 00:09:46.751 "base_bdevs_list": [ 00:09:46.751 { 00:09:46.751 "name": "BaseBdev1", 00:09:46.751 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:46.751 "is_configured": true, 00:09:46.751 "data_offset": 2048, 00:09:46.751 "data_size": 63488 00:09:46.751 }, 00:09:46.751 { 00:09:46.751 "name": null, 00:09:46.751 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:46.751 "is_configured": false, 00:09:46.751 "data_offset": 0, 00:09:46.751 "data_size": 63488 00:09:46.751 }, 00:09:46.751 { 00:09:46.751 "name": null, 00:09:46.751 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:46.751 "is_configured": false, 00:09:46.751 "data_offset": 0, 00:09:46.751 "data_size": 63488 00:09:46.751 }, 00:09:46.751 { 00:09:46.751 "name": "BaseBdev4", 00:09:46.751 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:46.751 "is_configured": true, 00:09:46.751 "data_offset": 2048, 00:09:46.751 "data_size": 63488 00:09:46.751 } 00:09:46.751 ] 00:09:46.751 }' 00:09:46.751 23:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.751 23:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.010 23:05:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.010 [2024-11-18 23:05:06.275437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.010 "name": "Existed_Raid", 00:09:47.010 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:47.010 "strip_size_kb": 64, 00:09:47.010 "state": "configuring", 00:09:47.010 "raid_level": "raid0", 00:09:47.010 "superblock": true, 00:09:47.010 "num_base_bdevs": 4, 00:09:47.010 "num_base_bdevs_discovered": 3, 00:09:47.010 "num_base_bdevs_operational": 4, 00:09:47.010 "base_bdevs_list": [ 00:09:47.010 { 00:09:47.010 "name": "BaseBdev1", 00:09:47.010 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:47.010 "is_configured": true, 00:09:47.010 "data_offset": 2048, 00:09:47.010 "data_size": 63488 00:09:47.010 }, 00:09:47.010 { 00:09:47.010 "name": null, 00:09:47.010 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:47.010 "is_configured": false, 00:09:47.010 "data_offset": 0, 00:09:47.010 "data_size": 63488 00:09:47.010 }, 00:09:47.010 { 00:09:47.010 "name": "BaseBdev3", 00:09:47.010 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:47.010 "is_configured": true, 00:09:47.010 "data_offset": 2048, 00:09:47.010 "data_size": 63488 00:09:47.010 }, 00:09:47.010 { 00:09:47.010 "name": "BaseBdev4", 00:09:47.010 "uuid": 
"7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:47.010 "is_configured": true, 00:09:47.010 "data_offset": 2048, 00:09:47.010 "data_size": 63488 00:09:47.010 } 00:09:47.010 ] 00:09:47.010 }' 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.010 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.579 [2024-11-18 23:05:06.750710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.579 "name": "Existed_Raid", 00:09:47.579 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:47.579 "strip_size_kb": 64, 00:09:47.579 "state": "configuring", 00:09:47.579 "raid_level": "raid0", 00:09:47.579 "superblock": true, 00:09:47.579 "num_base_bdevs": 4, 00:09:47.579 "num_base_bdevs_discovered": 2, 00:09:47.579 "num_base_bdevs_operational": 4, 00:09:47.579 "base_bdevs_list": [ 00:09:47.579 { 00:09:47.579 "name": null, 00:09:47.579 
"uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:47.579 "is_configured": false, 00:09:47.579 "data_offset": 0, 00:09:47.579 "data_size": 63488 00:09:47.579 }, 00:09:47.579 { 00:09:47.579 "name": null, 00:09:47.579 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:47.579 "is_configured": false, 00:09:47.579 "data_offset": 0, 00:09:47.579 "data_size": 63488 00:09:47.579 }, 00:09:47.579 { 00:09:47.579 "name": "BaseBdev3", 00:09:47.579 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:47.579 "is_configured": true, 00:09:47.579 "data_offset": 2048, 00:09:47.579 "data_size": 63488 00:09:47.579 }, 00:09:47.579 { 00:09:47.579 "name": "BaseBdev4", 00:09:47.579 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:47.579 "is_configured": true, 00:09:47.579 "data_offset": 2048, 00:09:47.579 "data_size": 63488 00:09:47.579 } 00:09:47.579 ] 00:09:47.579 }' 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.579 23:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.840 [2024-11-18 23:05:07.200398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.840 23:05:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.100 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.100 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.100 "name": "Existed_Raid", 00:09:48.100 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:48.100 "strip_size_kb": 64, 00:09:48.100 "state": "configuring", 00:09:48.100 "raid_level": "raid0", 00:09:48.100 "superblock": true, 00:09:48.100 "num_base_bdevs": 4, 00:09:48.100 "num_base_bdevs_discovered": 3, 00:09:48.100 "num_base_bdevs_operational": 4, 00:09:48.100 "base_bdevs_list": [ 00:09:48.100 { 00:09:48.100 "name": null, 00:09:48.100 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:48.100 "is_configured": false, 00:09:48.100 "data_offset": 0, 00:09:48.100 "data_size": 63488 00:09:48.100 }, 00:09:48.100 { 00:09:48.100 "name": "BaseBdev2", 00:09:48.100 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:48.100 "is_configured": true, 00:09:48.100 "data_offset": 2048, 00:09:48.100 "data_size": 63488 00:09:48.100 }, 00:09:48.100 { 00:09:48.100 "name": "BaseBdev3", 00:09:48.100 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:48.100 "is_configured": true, 00:09:48.100 "data_offset": 2048, 00:09:48.100 "data_size": 63488 00:09:48.100 }, 00:09:48.100 { 00:09:48.100 "name": "BaseBdev4", 00:09:48.100 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:48.100 "is_configured": true, 00:09:48.100 "data_offset": 2048, 00:09:48.100 "data_size": 63488 00:09:48.100 } 00:09:48.100 ] 00:09:48.100 }' 00:09:48.100 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.100 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.358 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.358 23:05:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.358 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.358 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.358 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a35c7e67-c3a8-4cb1-badc-b10c892aacc1 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.359 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.616 [2024-11-18 23:05:07.746383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:48.616 NewBaseBdev 00:09:48.616 [2024-11-18 23:05:07.746623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:48.616 [2024-11-18 23:05:07.746639] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:48.616 [2024-11-18 23:05:07.746886] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:48.616 [2024-11-18 23:05:07.746994] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:48.616 [2024-11-18 23:05:07.747006] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:48.616 [2024-11-18 23:05:07.747097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.616 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.617 23:05:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.617 [ 00:09:48.617 { 00:09:48.617 "name": "NewBaseBdev", 00:09:48.617 "aliases": [ 00:09:48.617 "a35c7e67-c3a8-4cb1-badc-b10c892aacc1" 00:09:48.617 ], 00:09:48.617 "product_name": "Malloc disk", 00:09:48.617 "block_size": 512, 00:09:48.617 "num_blocks": 65536, 00:09:48.617 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:48.617 "assigned_rate_limits": { 00:09:48.617 "rw_ios_per_sec": 0, 00:09:48.617 "rw_mbytes_per_sec": 0, 00:09:48.617 "r_mbytes_per_sec": 0, 00:09:48.617 "w_mbytes_per_sec": 0 00:09:48.617 }, 00:09:48.617 "claimed": true, 00:09:48.617 "claim_type": "exclusive_write", 00:09:48.617 "zoned": false, 00:09:48.617 "supported_io_types": { 00:09:48.617 "read": true, 00:09:48.617 "write": true, 00:09:48.617 "unmap": true, 00:09:48.617 "flush": true, 00:09:48.617 "reset": true, 00:09:48.617 "nvme_admin": false, 00:09:48.617 "nvme_io": false, 00:09:48.617 "nvme_io_md": false, 00:09:48.617 "write_zeroes": true, 00:09:48.617 "zcopy": true, 00:09:48.617 "get_zone_info": false, 00:09:48.617 "zone_management": false, 00:09:48.617 "zone_append": false, 00:09:48.617 "compare": false, 00:09:48.617 "compare_and_write": false, 00:09:48.617 "abort": true, 00:09:48.617 "seek_hole": false, 00:09:48.617 "seek_data": false, 00:09:48.617 "copy": true, 00:09:48.617 "nvme_iov_md": false 00:09:48.617 }, 00:09:48.617 "memory_domains": [ 00:09:48.617 { 00:09:48.617 "dma_device_id": "system", 00:09:48.617 "dma_device_type": 1 00:09:48.617 }, 00:09:48.617 { 00:09:48.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.617 "dma_device_type": 2 00:09:48.617 } 00:09:48.617 ], 00:09:48.617 "driver_specific": {} 00:09:48.617 } 00:09:48.617 ] 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.617 23:05:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.617 "name": "Existed_Raid", 00:09:48.617 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:48.617 "strip_size_kb": 64, 00:09:48.617 
"state": "online", 00:09:48.617 "raid_level": "raid0", 00:09:48.617 "superblock": true, 00:09:48.617 "num_base_bdevs": 4, 00:09:48.617 "num_base_bdevs_discovered": 4, 00:09:48.617 "num_base_bdevs_operational": 4, 00:09:48.617 "base_bdevs_list": [ 00:09:48.617 { 00:09:48.617 "name": "NewBaseBdev", 00:09:48.617 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:48.617 "is_configured": true, 00:09:48.617 "data_offset": 2048, 00:09:48.617 "data_size": 63488 00:09:48.617 }, 00:09:48.617 { 00:09:48.617 "name": "BaseBdev2", 00:09:48.617 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:48.617 "is_configured": true, 00:09:48.617 "data_offset": 2048, 00:09:48.617 "data_size": 63488 00:09:48.617 }, 00:09:48.617 { 00:09:48.617 "name": "BaseBdev3", 00:09:48.617 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:48.617 "is_configured": true, 00:09:48.617 "data_offset": 2048, 00:09:48.617 "data_size": 63488 00:09:48.617 }, 00:09:48.617 { 00:09:48.617 "name": "BaseBdev4", 00:09:48.617 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:48.617 "is_configured": true, 00:09:48.617 "data_offset": 2048, 00:09:48.617 "data_size": 63488 00:09:48.617 } 00:09:48.617 ] 00:09:48.617 }' 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.617 23:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.876 
23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.876 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.876 [2024-11-18 23:05:08.229898] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.138 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.138 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.138 "name": "Existed_Raid", 00:09:49.138 "aliases": [ 00:09:49.138 "4949766d-8c31-4113-91ef-ffef50bd43bd" 00:09:49.138 ], 00:09:49.138 "product_name": "Raid Volume", 00:09:49.138 "block_size": 512, 00:09:49.138 "num_blocks": 253952, 00:09:49.138 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:49.138 "assigned_rate_limits": { 00:09:49.138 "rw_ios_per_sec": 0, 00:09:49.138 "rw_mbytes_per_sec": 0, 00:09:49.138 "r_mbytes_per_sec": 0, 00:09:49.138 "w_mbytes_per_sec": 0 00:09:49.138 }, 00:09:49.138 "claimed": false, 00:09:49.138 "zoned": false, 00:09:49.138 "supported_io_types": { 00:09:49.138 "read": true, 00:09:49.138 "write": true, 00:09:49.138 "unmap": true, 00:09:49.138 "flush": true, 00:09:49.138 "reset": true, 00:09:49.138 "nvme_admin": false, 00:09:49.138 "nvme_io": false, 00:09:49.138 "nvme_io_md": false, 00:09:49.138 "write_zeroes": true, 00:09:49.138 "zcopy": false, 00:09:49.138 "get_zone_info": false, 00:09:49.138 "zone_management": false, 00:09:49.138 "zone_append": false, 00:09:49.138 "compare": false, 00:09:49.138 "compare_and_write": false, 00:09:49.138 "abort": 
false, 00:09:49.138 "seek_hole": false, 00:09:49.138 "seek_data": false, 00:09:49.138 "copy": false, 00:09:49.138 "nvme_iov_md": false 00:09:49.138 }, 00:09:49.138 "memory_domains": [ 00:09:49.138 { 00:09:49.139 "dma_device_id": "system", 00:09:49.139 "dma_device_type": 1 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.139 "dma_device_type": 2 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "system", 00:09:49.139 "dma_device_type": 1 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.139 "dma_device_type": 2 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "system", 00:09:49.139 "dma_device_type": 1 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.139 "dma_device_type": 2 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "system", 00:09:49.139 "dma_device_type": 1 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.139 "dma_device_type": 2 00:09:49.139 } 00:09:49.139 ], 00:09:49.139 "driver_specific": { 00:09:49.139 "raid": { 00:09:49.139 "uuid": "4949766d-8c31-4113-91ef-ffef50bd43bd", 00:09:49.139 "strip_size_kb": 64, 00:09:49.139 "state": "online", 00:09:49.139 "raid_level": "raid0", 00:09:49.139 "superblock": true, 00:09:49.139 "num_base_bdevs": 4, 00:09:49.139 "num_base_bdevs_discovered": 4, 00:09:49.139 "num_base_bdevs_operational": 4, 00:09:49.139 "base_bdevs_list": [ 00:09:49.139 { 00:09:49.139 "name": "NewBaseBdev", 00:09:49.139 "uuid": "a35c7e67-c3a8-4cb1-badc-b10c892aacc1", 00:09:49.139 "is_configured": true, 00:09:49.139 "data_offset": 2048, 00:09:49.139 "data_size": 63488 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "name": "BaseBdev2", 00:09:49.139 "uuid": "544ba129-96e8-42ca-8f67-5f2efd05ff3b", 00:09:49.139 "is_configured": true, 00:09:49.139 "data_offset": 2048, 00:09:49.139 "data_size": 63488 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 
"name": "BaseBdev3", 00:09:49.139 "uuid": "46e33c7d-357f-4ec3-82c5-d3d95f0f6f0d", 00:09:49.139 "is_configured": true, 00:09:49.139 "data_offset": 2048, 00:09:49.139 "data_size": 63488 00:09:49.139 }, 00:09:49.139 { 00:09:49.139 "name": "BaseBdev4", 00:09:49.139 "uuid": "7cc20316-5bf9-4800-b500-dabb24843c03", 00:09:49.139 "is_configured": true, 00:09:49.139 "data_offset": 2048, 00:09:49.139 "data_size": 63488 00:09:49.139 } 00:09:49.139 ] 00:09:49.139 } 00:09:49.139 } 00:09:49.139 }' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:49.139 BaseBdev2 00:09:49.139 BaseBdev3 00:09:49.139 BaseBdev4' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.139 23:05:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.139 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.400 [2024-11-18 23:05:08.549064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.400 [2024-11-18 23:05:08.549137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.400 [2024-11-18 23:05:08.549212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.400 [2024-11-18 23:05:08.549275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.400 [2024-11-18 23:05:08.549300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80967 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80967 ']' 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80967 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80967 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80967' 00:09:49.400 killing process with pid 80967 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80967 00:09:49.400 [2024-11-18 23:05:08.589269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.400 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80967 00:09:49.400 [2024-11-18 23:05:08.630119] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.660 23:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.660 00:09:49.660 real 0m9.404s 00:09:49.660 user 0m16.042s 00:09:49.660 sys 0m1.968s 00:09:49.660 23:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.660 23:05:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.660 ************************************ 00:09:49.660 END TEST raid_state_function_test_sb 00:09:49.660 ************************************ 00:09:49.660 23:05:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:49.660 23:05:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:49.660 23:05:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.660 23:05:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.660 ************************************ 00:09:49.660 START TEST raid_superblock_test 00:09:49.660 ************************************ 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:49.660 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81611 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81611 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81611 ']' 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.661 23:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.661 [2024-11-18 23:05:09.036076] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:49.661 [2024-11-18 23:05:09.036314] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81611 ] 00:09:49.921 [2024-11-18 23:05:09.197205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.921 [2024-11-18 23:05:09.241258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.921 [2024-11-18 23:05:09.282858] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.921 [2024-11-18 23:05:09.282970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:50.488 
23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.488 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 malloc1 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 [2024-11-18 23:05:09.869181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.749 [2024-11-18 23:05:09.869271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.749 [2024-11-18 23:05:09.869310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:50.749 [2024-11-18 23:05:09.869325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.749 [2024-11-18 23:05:09.871504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.749 [2024-11-18 23:05:09.871545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.749 pt1 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 malloc2 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 [2024-11-18 23:05:09.912572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.749 [2024-11-18 23:05:09.912772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.749 [2024-11-18 23:05:09.912853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:50.749 [2024-11-18 23:05:09.912935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.749 [2024-11-18 23:05:09.917789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.749 [2024-11-18 23:05:09.917942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.749 
pt2 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 malloc3 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 [2024-11-18 23:05:09.947319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.749 [2024-11-18 23:05:09.947418] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.749 [2024-11-18 23:05:09.947451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:50.749 [2024-11-18 23:05:09.947480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.749 [2024-11-18 23:05:09.949508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.749 [2024-11-18 23:05:09.949576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.749 pt3 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 malloc4 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 [2024-11-18 23:05:09.979710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:50.749 [2024-11-18 23:05:09.979760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.749 [2024-11-18 23:05:09.979775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:50.749 [2024-11-18 23:05:09.979788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.749 [2024-11-18 23:05:09.981918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.749 [2024-11-18 23:05:09.981955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:50.749 pt4 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.749 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.749 [2024-11-18 23:05:09.991761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.749 [2024-11-18 
23:05:09.993592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.749 [2024-11-18 23:05:09.993696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.749 [2024-11-18 23:05:09.993766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:50.749 [2024-11-18 23:05:09.993932] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:50.749 [2024-11-18 23:05:09.993946] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.749 [2024-11-18 23:05:09.994185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:50.749 [2024-11-18 23:05:09.994329] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:50.750 [2024-11-18 23:05:09.994340] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:50.750 [2024-11-18 23:05:09.994466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.750 23:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.750 "name": "raid_bdev1", 00:09:50.750 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:50.750 "strip_size_kb": 64, 00:09:50.750 "state": "online", 00:09:50.750 "raid_level": "raid0", 00:09:50.750 "superblock": true, 00:09:50.750 "num_base_bdevs": 4, 00:09:50.750 "num_base_bdevs_discovered": 4, 00:09:50.750 "num_base_bdevs_operational": 4, 00:09:50.750 "base_bdevs_list": [ 00:09:50.750 { 00:09:50.750 "name": "pt1", 00:09:50.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.750 "is_configured": true, 00:09:50.750 "data_offset": 2048, 00:09:50.750 "data_size": 63488 00:09:50.750 }, 00:09:50.750 { 00:09:50.750 "name": "pt2", 00:09:50.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.750 "is_configured": true, 00:09:50.750 "data_offset": 2048, 00:09:50.750 "data_size": 63488 00:09:50.750 }, 00:09:50.750 { 00:09:50.750 "name": "pt3", 00:09:50.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.750 "is_configured": true, 00:09:50.750 "data_offset": 2048, 00:09:50.750 
"data_size": 63488 00:09:50.750 }, 00:09:50.750 { 00:09:50.750 "name": "pt4", 00:09:50.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.750 "is_configured": true, 00:09:50.750 "data_offset": 2048, 00:09:50.750 "data_size": 63488 00:09:50.750 } 00:09:50.750 ] 00:09:50.750 }' 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.750 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.320 [2024-11-18 23:05:10.427425] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.320 "name": "raid_bdev1", 00:09:51.320 "aliases": [ 00:09:51.320 "f193b8df-65b2-4890-8696-2a9d89229bed" 
00:09:51.320 ], 00:09:51.320 "product_name": "Raid Volume", 00:09:51.320 "block_size": 512, 00:09:51.320 "num_blocks": 253952, 00:09:51.320 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:51.320 "assigned_rate_limits": { 00:09:51.320 "rw_ios_per_sec": 0, 00:09:51.320 "rw_mbytes_per_sec": 0, 00:09:51.320 "r_mbytes_per_sec": 0, 00:09:51.320 "w_mbytes_per_sec": 0 00:09:51.320 }, 00:09:51.320 "claimed": false, 00:09:51.320 "zoned": false, 00:09:51.320 "supported_io_types": { 00:09:51.320 "read": true, 00:09:51.320 "write": true, 00:09:51.320 "unmap": true, 00:09:51.320 "flush": true, 00:09:51.320 "reset": true, 00:09:51.320 "nvme_admin": false, 00:09:51.320 "nvme_io": false, 00:09:51.320 "nvme_io_md": false, 00:09:51.320 "write_zeroes": true, 00:09:51.320 "zcopy": false, 00:09:51.320 "get_zone_info": false, 00:09:51.320 "zone_management": false, 00:09:51.320 "zone_append": false, 00:09:51.320 "compare": false, 00:09:51.320 "compare_and_write": false, 00:09:51.320 "abort": false, 00:09:51.320 "seek_hole": false, 00:09:51.320 "seek_data": false, 00:09:51.320 "copy": false, 00:09:51.320 "nvme_iov_md": false 00:09:51.320 }, 00:09:51.320 "memory_domains": [ 00:09:51.320 { 00:09:51.320 "dma_device_id": "system", 00:09:51.320 "dma_device_type": 1 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.320 "dma_device_type": 2 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": "system", 00:09:51.320 "dma_device_type": 1 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.320 "dma_device_type": 2 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": "system", 00:09:51.320 "dma_device_type": 1 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.320 "dma_device_type": 2 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": "system", 00:09:51.320 "dma_device_type": 1 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:51.320 "dma_device_type": 2 00:09:51.320 } 00:09:51.320 ], 00:09:51.320 "driver_specific": { 00:09:51.320 "raid": { 00:09:51.320 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:51.320 "strip_size_kb": 64, 00:09:51.320 "state": "online", 00:09:51.320 "raid_level": "raid0", 00:09:51.320 "superblock": true, 00:09:51.320 "num_base_bdevs": 4, 00:09:51.320 "num_base_bdevs_discovered": 4, 00:09:51.320 "num_base_bdevs_operational": 4, 00:09:51.320 "base_bdevs_list": [ 00:09:51.320 { 00:09:51.320 "name": "pt1", 00:09:51.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.320 "is_configured": true, 00:09:51.320 "data_offset": 2048, 00:09:51.320 "data_size": 63488 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "name": "pt2", 00:09:51.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.320 "is_configured": true, 00:09:51.320 "data_offset": 2048, 00:09:51.320 "data_size": 63488 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "name": "pt3", 00:09:51.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.320 "is_configured": true, 00:09:51.320 "data_offset": 2048, 00:09:51.320 "data_size": 63488 00:09:51.320 }, 00:09:51.320 { 00:09:51.320 "name": "pt4", 00:09:51.320 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:51.320 "is_configured": true, 00:09:51.320 "data_offset": 2048, 00:09:51.320 "data_size": 63488 00:09:51.320 } 00:09:51.320 ] 00:09:51.320 } 00:09:51.320 } 00:09:51.320 }' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.320 pt2 00:09:51.320 pt3 00:09:51.320 pt4' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.320 23:05:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.320 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.589 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.589 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.589 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.589 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.589 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:51.590 [2024-11-18 23:05:10.726859] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f193b8df-65b2-4890-8696-2a9d89229bed 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f193b8df-65b2-4890-8696-2a9d89229bed ']' 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 [2024-11-18 23:05:10.778465] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.590 [2024-11-18 23:05:10.778494] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.590 [2024-11-18 23:05:10.778563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.590 [2024-11-18 23:05:10.778634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.590 [2024-11-18 23:05:10.778644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 [2024-11-18 23:05:10.922264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:51.590 [2024-11-18 23:05:10.924072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:51.590 [2024-11-18 23:05:10.924114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:51.590 [2024-11-18 23:05:10.924141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:51.590 [2024-11-18 23:05:10.924184] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:51.590 [2024-11-18 23:05:10.924236] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:51.590 [2024-11-18 23:05:10.924255] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:51.590 [2024-11-18 23:05:10.924271] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:51.590 [2024-11-18 23:05:10.924296] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.590 [2024-11-18 23:05:10.924305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:09:51.590 request: 00:09:51.590 { 00:09:51.590 "name": "raid_bdev1", 00:09:51.590 "raid_level": "raid0", 00:09:51.590 "base_bdevs": [ 00:09:51.590 "malloc1", 00:09:51.590 "malloc2", 00:09:51.590 "malloc3", 00:09:51.590 "malloc4" 00:09:51.590 ], 00:09:51.590 "strip_size_kb": 64, 00:09:51.590 "superblock": false, 00:09:51.590 "method": "bdev_raid_create", 00:09:51.590 "req_id": 1 00:09:51.590 } 00:09:51.590 Got JSON-RPC error response 00:09:51.590 response: 00:09:51.590 { 00:09:51.590 "code": -17, 00:09:51.590 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:51.590 } 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:51.590 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.851 [2024-11-18 23:05:10.990102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.851 [2024-11-18 23:05:10.990186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.851 [2024-11-18 23:05:10.990222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:51.851 [2024-11-18 23:05:10.990250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.851 [2024-11-18 23:05:10.992432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.851 [2024-11-18 23:05:10.992498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.851 [2024-11-18 23:05:10.992584] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.851 [2024-11-18 23:05:10.992640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.851 pt1 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.851 23:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.851 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.851 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.851 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.851 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.851 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.851 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.851 "name": "raid_bdev1", 00:09:51.851 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:51.851 "strip_size_kb": 64, 00:09:51.851 "state": "configuring", 00:09:51.851 "raid_level": "raid0", 00:09:51.851 "superblock": true, 00:09:51.851 "num_base_bdevs": 4, 00:09:51.851 "num_base_bdevs_discovered": 1, 00:09:51.851 "num_base_bdevs_operational": 4, 00:09:51.851 "base_bdevs_list": [ 00:09:51.851 { 00:09:51.851 "name": "pt1", 00:09:51.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.851 "is_configured": true, 00:09:51.851 "data_offset": 2048, 00:09:51.851 "data_size": 63488 00:09:51.851 }, 00:09:51.851 { 00:09:51.851 "name": null, 00:09:51.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.852 "is_configured": false, 00:09:51.852 "data_offset": 2048, 00:09:51.852 "data_size": 63488 00:09:51.852 }, 00:09:51.852 { 00:09:51.852 "name": null, 00:09:51.852 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:51.852 "is_configured": false, 00:09:51.852 "data_offset": 2048, 00:09:51.852 "data_size": 63488 00:09:51.852 }, 00:09:51.852 { 00:09:51.852 "name": null, 00:09:51.852 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:51.852 "is_configured": false, 00:09:51.852 "data_offset": 2048, 00:09:51.852 "data_size": 63488 00:09:51.852 } 00:09:51.852 ] 00:09:51.852 }' 00:09:51.852 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.852 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.112 [2024-11-18 23:05:11.373451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.112 [2024-11-18 23:05:11.373558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.112 [2024-11-18 23:05:11.373595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:52.112 [2024-11-18 23:05:11.373644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.112 [2024-11-18 23:05:11.374023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.112 [2024-11-18 23:05:11.374041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.112 [2024-11-18 23:05:11.374109] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.112 [2024-11-18 23:05:11.374127] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.112 pt2 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.112 [2024-11-18 23:05:11.385441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.112 23:05:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.112 "name": "raid_bdev1", 00:09:52.112 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:52.112 "strip_size_kb": 64, 00:09:52.112 "state": "configuring", 00:09:52.112 "raid_level": "raid0", 00:09:52.112 "superblock": true, 00:09:52.112 "num_base_bdevs": 4, 00:09:52.112 "num_base_bdevs_discovered": 1, 00:09:52.112 "num_base_bdevs_operational": 4, 00:09:52.112 "base_bdevs_list": [ 00:09:52.112 { 00:09:52.112 "name": "pt1", 00:09:52.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.112 "is_configured": true, 00:09:52.112 "data_offset": 2048, 00:09:52.112 "data_size": 63488 00:09:52.112 }, 00:09:52.112 { 00:09:52.112 "name": null, 00:09:52.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.112 "is_configured": false, 00:09:52.112 "data_offset": 0, 00:09:52.112 "data_size": 63488 00:09:52.112 }, 00:09:52.112 { 00:09:52.112 "name": null, 00:09:52.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.112 "is_configured": false, 00:09:52.112 "data_offset": 2048, 00:09:52.112 "data_size": 63488 00:09:52.112 }, 00:09:52.112 { 00:09:52.112 "name": null, 00:09:52.112 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:52.112 "is_configured": false, 00:09:52.112 "data_offset": 2048, 00:09:52.112 "data_size": 63488 00:09:52.112 } 00:09:52.112 ] 00:09:52.112 }' 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.112 23:05:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.681 [2024-11-18 23:05:11.796712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.681 [2024-11-18 23:05:11.796810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.681 [2024-11-18 23:05:11.796854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:52.681 [2024-11-18 23:05:11.796886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.681 [2024-11-18 23:05:11.797309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.681 [2024-11-18 23:05:11.797366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.681 [2024-11-18 23:05:11.797458] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.681 [2024-11-18 23:05:11.797507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.681 pt2 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.681 [2024-11-18 23:05:11.808675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.681 [2024-11-18 23:05:11.808786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.681 [2024-11-18 23:05:11.808819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:52.681 [2024-11-18 23:05:11.808869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.681 [2024-11-18 23:05:11.809192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.681 [2024-11-18 23:05:11.809248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.681 [2024-11-18 23:05:11.809333] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:52.681 [2024-11-18 23:05:11.809380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.681 pt3 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.681 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.681 [2024-11-18 23:05:11.820657] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:52.681 [2024-11-18 23:05:11.820721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.681 [2024-11-18 23:05:11.820736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:52.681 [2024-11-18 23:05:11.820746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.681 [2024-11-18 23:05:11.821027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.681 [2024-11-18 23:05:11.821043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:52.681 [2024-11-18 23:05:11.821088] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:52.681 [2024-11-18 23:05:11.821106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:52.681 [2024-11-18 23:05:11.821204] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:52.681 [2024-11-18 23:05:11.821216] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.682 [2024-11-18 23:05:11.821441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.682 [2024-11-18 23:05:11.821568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:52.682 [2024-11-18 23:05:11.821577] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:52.682 [2024-11-18 23:05:11.821666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.682 pt4 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.682 "name": "raid_bdev1", 00:09:52.682 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:52.682 "strip_size_kb": 64, 00:09:52.682 "state": "online", 00:09:52.682 "raid_level": "raid0", 00:09:52.682 
"superblock": true, 00:09:52.682 "num_base_bdevs": 4, 00:09:52.682 "num_base_bdevs_discovered": 4, 00:09:52.682 "num_base_bdevs_operational": 4, 00:09:52.682 "base_bdevs_list": [ 00:09:52.682 { 00:09:52.682 "name": "pt1", 00:09:52.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.682 "is_configured": true, 00:09:52.682 "data_offset": 2048, 00:09:52.682 "data_size": 63488 00:09:52.682 }, 00:09:52.682 { 00:09:52.682 "name": "pt2", 00:09:52.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.682 "is_configured": true, 00:09:52.682 "data_offset": 2048, 00:09:52.682 "data_size": 63488 00:09:52.682 }, 00:09:52.682 { 00:09:52.682 "name": "pt3", 00:09:52.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.682 "is_configured": true, 00:09:52.682 "data_offset": 2048, 00:09:52.682 "data_size": 63488 00:09:52.682 }, 00:09:52.682 { 00:09:52.682 "name": "pt4", 00:09:52.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:52.682 "is_configured": true, 00:09:52.682 "data_offset": 2048, 00:09:52.682 "data_size": 63488 00:09:52.682 } 00:09:52.682 ] 00:09:52.682 }' 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.682 23:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.941 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.942 23:05:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.942 [2024-11-18 23:05:12.220297] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.942 "name": "raid_bdev1", 00:09:52.942 "aliases": [ 00:09:52.942 "f193b8df-65b2-4890-8696-2a9d89229bed" 00:09:52.942 ], 00:09:52.942 "product_name": "Raid Volume", 00:09:52.942 "block_size": 512, 00:09:52.942 "num_blocks": 253952, 00:09:52.942 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:52.942 "assigned_rate_limits": { 00:09:52.942 "rw_ios_per_sec": 0, 00:09:52.942 "rw_mbytes_per_sec": 0, 00:09:52.942 "r_mbytes_per_sec": 0, 00:09:52.942 "w_mbytes_per_sec": 0 00:09:52.942 }, 00:09:52.942 "claimed": false, 00:09:52.942 "zoned": false, 00:09:52.942 "supported_io_types": { 00:09:52.942 "read": true, 00:09:52.942 "write": true, 00:09:52.942 "unmap": true, 00:09:52.942 "flush": true, 00:09:52.942 "reset": true, 00:09:52.942 "nvme_admin": false, 00:09:52.942 "nvme_io": false, 00:09:52.942 "nvme_io_md": false, 00:09:52.942 "write_zeroes": true, 00:09:52.942 "zcopy": false, 00:09:52.942 "get_zone_info": false, 00:09:52.942 "zone_management": false, 00:09:52.942 "zone_append": false, 00:09:52.942 "compare": false, 00:09:52.942 "compare_and_write": false, 00:09:52.942 "abort": false, 00:09:52.942 "seek_hole": false, 00:09:52.942 "seek_data": false, 00:09:52.942 "copy": false, 00:09:52.942 "nvme_iov_md": false 00:09:52.942 }, 00:09:52.942 
"memory_domains": [ 00:09:52.942 { 00:09:52.942 "dma_device_id": "system", 00:09:52.942 "dma_device_type": 1 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.942 "dma_device_type": 2 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "system", 00:09:52.942 "dma_device_type": 1 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.942 "dma_device_type": 2 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "system", 00:09:52.942 "dma_device_type": 1 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.942 "dma_device_type": 2 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "system", 00:09:52.942 "dma_device_type": 1 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.942 "dma_device_type": 2 00:09:52.942 } 00:09:52.942 ], 00:09:52.942 "driver_specific": { 00:09:52.942 "raid": { 00:09:52.942 "uuid": "f193b8df-65b2-4890-8696-2a9d89229bed", 00:09:52.942 "strip_size_kb": 64, 00:09:52.942 "state": "online", 00:09:52.942 "raid_level": "raid0", 00:09:52.942 "superblock": true, 00:09:52.942 "num_base_bdevs": 4, 00:09:52.942 "num_base_bdevs_discovered": 4, 00:09:52.942 "num_base_bdevs_operational": 4, 00:09:52.942 "base_bdevs_list": [ 00:09:52.942 { 00:09:52.942 "name": "pt1", 00:09:52.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.942 "is_configured": true, 00:09:52.942 "data_offset": 2048, 00:09:52.942 "data_size": 63488 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "name": "pt2", 00:09:52.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.942 "is_configured": true, 00:09:52.942 "data_offset": 2048, 00:09:52.942 "data_size": 63488 00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "name": "pt3", 00:09:52.942 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.942 "is_configured": true, 00:09:52.942 "data_offset": 2048, 00:09:52.942 "data_size": 63488 
00:09:52.942 }, 00:09:52.942 { 00:09:52.942 "name": "pt4", 00:09:52.942 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:52.942 "is_configured": true, 00:09:52.942 "data_offset": 2048, 00:09:52.942 "data_size": 63488 00:09:52.942 } 00:09:52.942 ] 00:09:52.942 } 00:09:52.942 } 00:09:52.942 }' 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:52.942 pt2 00:09:52.942 pt3 00:09:52.942 pt4' 00:09:52.942 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.201 [2024-11-18 23:05:12.551684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f193b8df-65b2-4890-8696-2a9d89229bed '!=' f193b8df-65b2-4890-8696-2a9d89229bed ']' 00:09:53.201 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81611 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81611 ']' 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81611 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81611 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81611' 00:09:53.461 killing process with pid 81611 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81611 00:09:53.461 [2024-11-18 23:05:12.620012] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.461 [2024-11-18 23:05:12.620141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.461 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81611 00:09:53.461 [2024-11-18 23:05:12.620237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.461 [2024-11-18 23:05:12.620252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:53.461 [2024-11-18 23:05:12.663609] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.721 23:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:53.721 00:09:53.721 real 0m3.960s 00:09:53.721 user 0m6.204s 00:09:53.721 sys 0m0.869s 00:09:53.721 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.721 23:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.721 ************************************ 00:09:53.721 END TEST raid_superblock_test 
00:09:53.721 ************************************ 00:09:53.721 23:05:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:53.721 23:05:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:53.721 23:05:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.721 23:05:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.721 ************************************ 00:09:53.721 START TEST raid_read_error_test 00:09:53.721 ************************************ 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rEcB53zSIZ 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81858 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:09:53.721 23:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81858 00:09:53.721 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81858 ']' 00:09:53.721 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.721 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.721 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.721 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.721 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.721 [2024-11-18 23:05:13.078795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:53.721 [2024-11-18 23:05:13.078919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81858 ] 00:09:53.980 [2024-11-18 23:05:13.241370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.980 [2024-11-18 23:05:13.285655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.980 [2024-11-18 23:05:13.327366] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.980 [2024-11-18 23:05:13.327407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.564 BaseBdev1_malloc 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.564 true 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.564 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.564 [2024-11-18 23:05:13.929534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.564 [2024-11-18 23:05:13.929588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.564 [2024-11-18 23:05:13.929607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.564 [2024-11-18 23:05:13.929616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.858 [2024-11-18 23:05:13.931827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.858 [2024-11-18 23:05:13.931923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.858 BaseBdev1 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 BaseBdev2_malloc 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 true 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 [2024-11-18 23:05:13.980247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.858 [2024-11-18 23:05:13.980308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.858 [2024-11-18 23:05:13.980328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:54.858 [2024-11-18 23:05:13.980336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.858 [2024-11-18 23:05:13.982391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.858 [2024-11-18 23:05:13.982462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.858 BaseBdev2 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 BaseBdev3_malloc 00:09:54.858 23:05:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 true 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 [2024-11-18 23:05:14.020699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:54.858 [2024-11-18 23:05:14.020797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.858 [2024-11-18 23:05:14.020819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:54.858 [2024-11-18 23:05:14.020828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.858 [2024-11-18 23:05:14.022906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.858 [2024-11-18 23:05:14.022940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:54.858 BaseBdev3 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 BaseBdev4_malloc 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 true 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 [2024-11-18 23:05:14.061096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:54.858 [2024-11-18 23:05:14.061140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.858 [2024-11-18 23:05:14.061160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:54.858 [2024-11-18 23:05:14.061168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.858 [2024-11-18 23:05:14.063242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.858 [2024-11-18 23:05:14.063275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:54.858 BaseBdev4 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.858 [2024-11-18 23:05:14.073123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.858 [2024-11-18 23:05:14.074930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.858 [2024-11-18 23:05:14.075021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.858 [2024-11-18 23:05:14.075069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:54.858 [2024-11-18 23:05:14.075288] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:54.858 [2024-11-18 23:05:14.075300] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:54.858 [2024-11-18 23:05:14.075594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:54.858 [2024-11-18 23:05:14.075805] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:54.858 [2024-11-18 23:05:14.075858] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:54.858 [2024-11-18 23:05:14.076054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:54.858 23:05:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.858 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.859 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.859 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.859 "name": "raid_bdev1", 00:09:54.859 "uuid": "0372082a-dd7f-424b-8371-cc496c2eb442", 00:09:54.859 "strip_size_kb": 64, 00:09:54.859 "state": "online", 00:09:54.859 "raid_level": "raid0", 00:09:54.859 "superblock": true, 00:09:54.859 "num_base_bdevs": 4, 00:09:54.859 "num_base_bdevs_discovered": 4, 00:09:54.859 "num_base_bdevs_operational": 4, 00:09:54.859 "base_bdevs_list": [ 00:09:54.859 
{ 00:09:54.859 "name": "BaseBdev1", 00:09:54.859 "uuid": "11ff69f7-723c-5da3-91ed-9c8ceeb14a21", 00:09:54.859 "is_configured": true, 00:09:54.859 "data_offset": 2048, 00:09:54.859 "data_size": 63488 00:09:54.859 }, 00:09:54.859 { 00:09:54.859 "name": "BaseBdev2", 00:09:54.859 "uuid": "bee81bdc-a420-546a-a65d-f3a4ced9f52f", 00:09:54.859 "is_configured": true, 00:09:54.859 "data_offset": 2048, 00:09:54.859 "data_size": 63488 00:09:54.859 }, 00:09:54.859 { 00:09:54.859 "name": "BaseBdev3", 00:09:54.859 "uuid": "3f2a4230-bef0-5193-b277-b2bc7c178fb1", 00:09:54.859 "is_configured": true, 00:09:54.859 "data_offset": 2048, 00:09:54.859 "data_size": 63488 00:09:54.859 }, 00:09:54.859 { 00:09:54.859 "name": "BaseBdev4", 00:09:54.859 "uuid": "490a0282-c1d0-5d32-b7bd-f0a4222871b0", 00:09:54.859 "is_configured": true, 00:09:54.859 "data_offset": 2048, 00:09:54.859 "data_size": 63488 00:09:54.859 } 00:09:54.859 ] 00:09:54.859 }' 00:09:54.859 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.859 23:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.429 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.429 23:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.429 [2024-11-18 23:05:14.600558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.372 23:05:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.372 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.372 23:05:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.372 "name": "raid_bdev1", 00:09:56.373 "uuid": "0372082a-dd7f-424b-8371-cc496c2eb442", 00:09:56.373 "strip_size_kb": 64, 00:09:56.373 "state": "online", 00:09:56.373 "raid_level": "raid0", 00:09:56.373 "superblock": true, 00:09:56.373 "num_base_bdevs": 4, 00:09:56.373 "num_base_bdevs_discovered": 4, 00:09:56.373 "num_base_bdevs_operational": 4, 00:09:56.373 "base_bdevs_list": [ 00:09:56.373 { 00:09:56.373 "name": "BaseBdev1", 00:09:56.373 "uuid": "11ff69f7-723c-5da3-91ed-9c8ceeb14a21", 00:09:56.373 "is_configured": true, 00:09:56.373 "data_offset": 2048, 00:09:56.373 "data_size": 63488 00:09:56.373 }, 00:09:56.373 { 00:09:56.373 "name": "BaseBdev2", 00:09:56.373 "uuid": "bee81bdc-a420-546a-a65d-f3a4ced9f52f", 00:09:56.373 "is_configured": true, 00:09:56.373 "data_offset": 2048, 00:09:56.373 "data_size": 63488 00:09:56.373 }, 00:09:56.373 { 00:09:56.373 "name": "BaseBdev3", 00:09:56.373 "uuid": "3f2a4230-bef0-5193-b277-b2bc7c178fb1", 00:09:56.373 "is_configured": true, 00:09:56.373 "data_offset": 2048, 00:09:56.373 "data_size": 63488 00:09:56.373 }, 00:09:56.373 { 00:09:56.373 "name": "BaseBdev4", 00:09:56.373 "uuid": "490a0282-c1d0-5d32-b7bd-f0a4222871b0", 00:09:56.373 "is_configured": true, 00:09:56.373 "data_offset": 2048, 00:09:56.373 "data_size": 63488 00:09:56.373 } 00:09:56.373 ] 00:09:56.373 }' 00:09:56.373 23:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.373 23:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.632 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.632 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.906 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.907 [2024-11-18 23:05:16.016217] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.907 [2024-11-18 23:05:16.016332] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.907 [2024-11-18 23:05:16.018812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.907 [2024-11-18 23:05:16.018915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.907 [2024-11-18 23:05:16.018980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.907 [2024-11-18 23:05:16.019023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:56.907 { 00:09:56.907 "results": [ 00:09:56.907 { 00:09:56.907 "job": "raid_bdev1", 00:09:56.907 "core_mask": "0x1", 00:09:56.907 "workload": "randrw", 00:09:56.907 "percentage": 50, 00:09:56.907 "status": "finished", 00:09:56.907 "queue_depth": 1, 00:09:56.907 "io_size": 131072, 00:09:56.907 "runtime": 1.416645, 00:09:56.907 "iops": 17260.499278224255, 00:09:56.907 "mibps": 2157.562409778032, 00:09:56.907 "io_failed": 1, 00:09:56.907 "io_timeout": 0, 00:09:56.907 "avg_latency_us": 80.40432470310658, 00:09:56.907 "min_latency_us": 24.370305676855896, 00:09:56.907 "max_latency_us": 1366.5257641921398 00:09:56.907 } 00:09:56.907 ], 00:09:56.907 "core_count": 1 00:09:56.907 } 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81858 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81858 ']' 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81858 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81858 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.907 killing process with pid 81858 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81858' 00:09:56.907 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81858 00:09:56.908 [2024-11-18 23:05:16.065944] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.908 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81858 00:09:56.908 [2024-11-18 23:05:16.101249] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rEcB53zSIZ 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.171 ************************************ 00:09:57.171 END TEST raid_read_error_test 00:09:57.171 ************************************ 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:57.171 00:09:57.171 real 0m3.368s 
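The trace above ends the read test by pulling the per-second failure rate out of the bdevperf log (`grep -v Job | grep raid_bdev1 | awk '{print $6}'`) and requiring it to differ from `0.00`, since raid0 has no redundancy and must surface the injected read errors. A minimal sketch of that check follows; the function name and the fabricated demo log row are illustrative only, while the grep/awk pipeline and the `0.00` comparison mirror the trace:

```shell
#!/usr/bin/env bash
# Sketch of the fail_per_s check from bdev_raid.sh's raid_io_error_test.
# check_read_errors is a hypothetical helper name; the column-6 extraction
# and the "!= 0.00" test follow the commands visible in the log above.
check_read_errors() {
  local log=$1
  local fail_per_s
  # drop the Job header row, keep the raid_bdev1 result row, take column 6
  fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
  if [ "$fail_per_s" != "0.00" ]; then
    echo "raid0 surfaced injected read errors: $fail_per_s fail/s"
    return 0
  fi
  return 1
}

# demo with a fabricated one-row log (field values illustrative only;
# column 6 plays the role of fail/s, matching the awk extraction)
demo_log=$(mktemp)
printf 'raid_bdev1 17260.49 2157.56 1 0 0.71\n' > "$demo_log"
check_read_errors "$demo_log"
rm -f "$demo_log"
```

For a raid1 run the test would instead tolerate failures on a single base bdev; here the nonzero rate is the expected outcome, which is why the trace's `[[ 0.71 != \0\.\0\0 ]]` comparison passes.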
00:09:57.171 user 0m4.238s 00:09:57.171 sys 0m0.540s 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.171 23:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.171 23:05:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:57.171 23:05:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:57.171 23:05:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.171 23:05:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.171 ************************************ 00:09:57.171 START TEST raid_write_error_test 00:09:57.171 ************************************ 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JiCXnwW0nL 00:09:57.171 23:05:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81993 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81993 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81993 ']' 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.171 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.171 [2024-11-18 23:05:16.520039] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:57.171 [2024-11-18 23:05:16.520146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81993 ] 00:09:57.432 [2024-11-18 23:05:16.678682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.432 [2024-11-18 23:05:16.722812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.432 [2024-11-18 23:05:16.764642] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.432 [2024-11-18 23:05:16.764676] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.001 BaseBdev1_malloc 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.001 true 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.001 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.001 [2024-11-18 23:05:17.374526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:58.001 [2024-11-18 23:05:17.374576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.001 [2024-11-18 23:05:17.374603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:58.001 [2024-11-18 23:05:17.374612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.001 [2024-11-18 23:05:17.376785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.001 [2024-11-18 23:05:17.376888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:58.261 BaseBdev1 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 BaseBdev2_malloc 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:58.261 23:05:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 true 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 [2024-11-18 23:05:17.431625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.261 [2024-11-18 23:05:17.431687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.261 [2024-11-18 23:05:17.431713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:58.261 [2024-11-18 23:05:17.431726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.261 [2024-11-18 23:05:17.434564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.261 [2024-11-18 23:05:17.434608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.261 BaseBdev2 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:58.261 BaseBdev3_malloc 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 true 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 [2024-11-18 23:05:17.472001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:58.261 [2024-11-18 23:05:17.472046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.261 [2024-11-18 23:05:17.472065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:58.261 [2024-11-18 23:05:17.472073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.261 [2024-11-18 23:05:17.474092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.261 [2024-11-18 23:05:17.474127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:58.261 BaseBdev3 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.261 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.261 BaseBdev4_malloc 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.262 true 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.262 [2024-11-18 23:05:17.512418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:58.262 [2024-11-18 23:05:17.512472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.262 [2024-11-18 23:05:17.512493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:58.262 [2024-11-18 23:05:17.512501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.262 [2024-11-18 23:05:17.514447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.262 [2024-11-18 23:05:17.514532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:58.262 BaseBdev4 
00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.262 [2024-11-18 23:05:17.524453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.262 [2024-11-18 23:05:17.526242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.262 [2024-11-18 23:05:17.526356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.262 [2024-11-18 23:05:17.526409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.262 [2024-11-18 23:05:17.526591] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:58.262 [2024-11-18 23:05:17.526602] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:58.262 [2024-11-18 23:05:17.526856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.262 [2024-11-18 23:05:17.526995] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:58.262 [2024-11-18 23:05:17.527008] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:58.262 [2024-11-18 23:05:17.527139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.262 "name": "raid_bdev1", 00:09:58.262 "uuid": "7f1f2a77-2904-41cc-84cd-d744f8f121a1", 00:09:58.262 "strip_size_kb": 64, 00:09:58.262 "state": "online", 00:09:58.262 "raid_level": "raid0", 00:09:58.262 "superblock": true, 00:09:58.262 "num_base_bdevs": 4, 00:09:58.262 "num_base_bdevs_discovered": 4, 00:09:58.262 
"num_base_bdevs_operational": 4, 00:09:58.262 "base_bdevs_list": [ 00:09:58.262 { 00:09:58.262 "name": "BaseBdev1", 00:09:58.262 "uuid": "64238102-43d9-54e4-8bc1-b409daafa171", 00:09:58.262 "is_configured": true, 00:09:58.262 "data_offset": 2048, 00:09:58.262 "data_size": 63488 00:09:58.262 }, 00:09:58.262 { 00:09:58.262 "name": "BaseBdev2", 00:09:58.262 "uuid": "39af9995-0141-52e7-ae80-da070965f48f", 00:09:58.262 "is_configured": true, 00:09:58.262 "data_offset": 2048, 00:09:58.262 "data_size": 63488 00:09:58.262 }, 00:09:58.262 { 00:09:58.262 "name": "BaseBdev3", 00:09:58.262 "uuid": "c3762ffd-37cb-59d7-bf09-501ba6fa2086", 00:09:58.262 "is_configured": true, 00:09:58.262 "data_offset": 2048, 00:09:58.262 "data_size": 63488 00:09:58.262 }, 00:09:58.262 { 00:09:58.262 "name": "BaseBdev4", 00:09:58.262 "uuid": "7911ce92-e521-5ef1-9b74-c2e59abdd573", 00:09:58.262 "is_configured": true, 00:09:58.262 "data_offset": 2048, 00:09:58.262 "data_size": 63488 00:09:58.262 } 00:09:58.262 ] 00:09:58.262 }' 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.262 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.832 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.832 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.832 [2024-11-18 23:05:18.055914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.774 23:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.774 23:05:19 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.774 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.774 "name": "raid_bdev1", 00:09:59.774 "uuid": "7f1f2a77-2904-41cc-84cd-d744f8f121a1", 00:09:59.774 "strip_size_kb": 64, 00:09:59.774 "state": "online", 00:09:59.774 "raid_level": "raid0", 00:09:59.774 "superblock": true, 00:09:59.774 "num_base_bdevs": 4, 00:09:59.774 "num_base_bdevs_discovered": 4, 00:09:59.774 "num_base_bdevs_operational": 4, 00:09:59.774 "base_bdevs_list": [ 00:09:59.774 { 00:09:59.774 "name": "BaseBdev1", 00:09:59.774 "uuid": "64238102-43d9-54e4-8bc1-b409daafa171", 00:09:59.774 "is_configured": true, 00:09:59.774 "data_offset": 2048, 00:09:59.774 "data_size": 63488 00:09:59.774 }, 00:09:59.774 { 00:09:59.774 "name": "BaseBdev2", 00:09:59.774 "uuid": "39af9995-0141-52e7-ae80-da070965f48f", 00:09:59.774 "is_configured": true, 00:09:59.774 "data_offset": 2048, 00:09:59.774 "data_size": 63488 00:09:59.774 }, 00:09:59.774 { 00:09:59.774 "name": "BaseBdev3", 00:09:59.774 "uuid": "c3762ffd-37cb-59d7-bf09-501ba6fa2086", 00:09:59.774 "is_configured": true, 00:09:59.774 "data_offset": 2048, 00:09:59.774 "data_size": 63488 00:09:59.774 }, 00:09:59.774 { 00:09:59.774 "name": "BaseBdev4", 00:09:59.774 "uuid": "7911ce92-e521-5ef1-9b74-c2e59abdd573", 00:09:59.774 "is_configured": true, 00:09:59.774 "data_offset": 2048, 00:09:59.774 "data_size": 63488 00:09:59.774 } 00:09:59.774 ] 00:09:59.774 }' 00:09:59.774 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.774 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:00.342 [2024-11-18 23:05:19.487626] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.342 [2024-11-18 23:05:19.487714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.342 [2024-11-18 23:05:19.490122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.342 [2024-11-18 23:05:19.490211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.342 [2024-11-18 23:05:19.490305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.342 [2024-11-18 23:05:19.490356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:00.342 { 00:10:00.342 "results": [ 00:10:00.342 { 00:10:00.342 "job": "raid_bdev1", 00:10:00.342 "core_mask": "0x1", 00:10:00.342 "workload": "randrw", 00:10:00.342 "percentage": 50, 00:10:00.342 "status": "finished", 00:10:00.342 "queue_depth": 1, 00:10:00.342 "io_size": 131072, 00:10:00.342 "runtime": 1.432727, 00:10:00.342 "iops": 17179.82560529675, 00:10:00.342 "mibps": 2147.478200662094, 00:10:00.342 "io_failed": 1, 00:10:00.342 "io_timeout": 0, 00:10:00.342 "avg_latency_us": 80.87127985828927, 00:10:00.342 "min_latency_us": 24.593886462882097, 00:10:00.342 "max_latency_us": 1373.6803493449781 00:10:00.342 } 00:10:00.342 ], 00:10:00.342 "core_count": 1 00:10:00.342 } 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81993 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81993 ']' 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 81993 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81993 00:10:00.342 killing process with pid 81993 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81993' 00:10:00.342 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81993 00:10:00.343 [2024-11-18 23:05:19.529297] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.343 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81993 00:10:00.343 [2024-11-18 23:05:19.564588] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JiCXnwW0nL 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:00.603 00:10:00.603 real 0m3.390s 00:10:00.603 user 0m4.304s 00:10:00.603 sys 0m0.531s 00:10:00.603 23:05:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.603 23:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.603 ************************************ 00:10:00.603 END TEST raid_write_error_test 00:10:00.603 ************************************ 00:10:00.603 23:05:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:00.603 23:05:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:00.603 23:05:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:00.603 23:05:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.603 23:05:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.603 ************************************ 00:10:00.603 START TEST raid_state_function_test 00:10:00.603 ************************************ 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82125 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82125' 00:10:00.603 Process raid pid: 82125 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82125 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82125 ']' 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.603 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.603 [2024-11-18 23:05:19.971843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:00.604 [2024-11-18 23:05:19.972049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.861 [2024-11-18 23:05:20.135518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.862 [2024-11-18 23:05:20.182947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.862 [2024-11-18 23:05:20.224929] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.862 [2024-11-18 23:05:20.224982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.431 [2024-11-18 23:05:20.790433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.431 [2024-11-18 23:05:20.790542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.431 [2024-11-18 23:05:20.790567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.431 [2024-11-18 23:05:20.790578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.431 [2024-11-18 23:05:20.790584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:01.431 [2024-11-18 23:05:20.790596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.431 [2024-11-18 23:05:20.790602] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.431 [2024-11-18 23:05:20.790610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.431 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.691 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.691 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.691 "name": "Existed_Raid", 00:10:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.691 "strip_size_kb": 64, 00:10:01.691 "state": "configuring", 00:10:01.691 "raid_level": "concat", 00:10:01.691 "superblock": false, 00:10:01.691 "num_base_bdevs": 4, 00:10:01.691 "num_base_bdevs_discovered": 0, 00:10:01.691 "num_base_bdevs_operational": 4, 00:10:01.691 "base_bdevs_list": [ 00:10:01.691 { 00:10:01.691 "name": "BaseBdev1", 00:10:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.691 "is_configured": false, 00:10:01.691 "data_offset": 0, 00:10:01.691 "data_size": 0 00:10:01.691 }, 00:10:01.691 { 00:10:01.691 "name": "BaseBdev2", 00:10:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.691 "is_configured": false, 00:10:01.691 "data_offset": 0, 00:10:01.691 "data_size": 0 00:10:01.691 }, 00:10:01.691 { 00:10:01.691 "name": "BaseBdev3", 00:10:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.691 "is_configured": false, 00:10:01.691 "data_offset": 0, 00:10:01.691 "data_size": 0 00:10:01.691 }, 00:10:01.691 { 00:10:01.691 "name": "BaseBdev4", 00:10:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.691 "is_configured": false, 00:10:01.691 "data_offset": 0, 00:10:01.691 "data_size": 0 00:10:01.691 } 00:10:01.691 ] 00:10:01.691 }' 00:10:01.691 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.691 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 [2024-11-18 23:05:21.237546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.950 [2024-11-18 23:05:21.237630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 [2024-11-18 23:05:21.249576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.950 [2024-11-18 23:05:21.249649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.950 [2024-11-18 23:05:21.249693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.950 [2024-11-18 23:05:21.249715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.950 [2024-11-18 23:05:21.249778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.950 [2024-11-18 23:05:21.249800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.950 [2024-11-18 23:05:21.249828] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.950 [2024-11-18 23:05:21.249856] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 [2024-11-18 23:05:21.270213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.950 BaseBdev1 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 [ 00:10:01.950 { 00:10:01.950 "name": "BaseBdev1", 00:10:01.950 "aliases": [ 00:10:01.950 "19fb2568-b6ce-42c6-90de-ffa6b4a17328" 00:10:01.950 ], 00:10:01.950 "product_name": "Malloc disk", 00:10:01.950 "block_size": 512, 00:10:01.950 "num_blocks": 65536, 00:10:01.950 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:01.950 "assigned_rate_limits": { 00:10:01.950 "rw_ios_per_sec": 0, 00:10:01.950 "rw_mbytes_per_sec": 0, 00:10:01.950 "r_mbytes_per_sec": 0, 00:10:01.950 "w_mbytes_per_sec": 0 00:10:01.950 }, 00:10:01.950 "claimed": true, 00:10:01.950 "claim_type": "exclusive_write", 00:10:01.950 "zoned": false, 00:10:01.950 "supported_io_types": { 00:10:01.950 "read": true, 00:10:01.950 "write": true, 00:10:01.950 "unmap": true, 00:10:01.950 "flush": true, 00:10:01.950 "reset": true, 00:10:01.950 "nvme_admin": false, 00:10:01.950 "nvme_io": false, 00:10:01.950 "nvme_io_md": false, 00:10:01.950 "write_zeroes": true, 00:10:01.950 "zcopy": true, 00:10:01.950 "get_zone_info": false, 00:10:01.950 "zone_management": false, 00:10:01.950 "zone_append": false, 00:10:01.950 "compare": false, 00:10:01.950 "compare_and_write": false, 00:10:01.950 "abort": true, 00:10:01.950 "seek_hole": false, 00:10:01.950 "seek_data": false, 00:10:01.950 "copy": true, 00:10:01.950 "nvme_iov_md": false 00:10:01.950 }, 00:10:01.950 "memory_domains": [ 00:10:01.950 { 00:10:01.950 "dma_device_id": "system", 00:10:01.950 "dma_device_type": 1 00:10:01.950 }, 00:10:01.950 { 00:10:01.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.950 "dma_device_type": 2 00:10:01.950 } 00:10:01.950 ], 00:10:01.950 "driver_specific": {} 00:10:01.950 } 00:10:01.950 ] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.950 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.951 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.211 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.211 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.211 "name": "Existed_Raid", 
00:10:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.211 "strip_size_kb": 64, 00:10:02.211 "state": "configuring", 00:10:02.211 "raid_level": "concat", 00:10:02.211 "superblock": false, 00:10:02.211 "num_base_bdevs": 4, 00:10:02.211 "num_base_bdevs_discovered": 1, 00:10:02.211 "num_base_bdevs_operational": 4, 00:10:02.211 "base_bdevs_list": [ 00:10:02.211 { 00:10:02.211 "name": "BaseBdev1", 00:10:02.211 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:02.211 "is_configured": true, 00:10:02.211 "data_offset": 0, 00:10:02.211 "data_size": 65536 00:10:02.211 }, 00:10:02.211 { 00:10:02.211 "name": "BaseBdev2", 00:10:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.211 "is_configured": false, 00:10:02.211 "data_offset": 0, 00:10:02.211 "data_size": 0 00:10:02.211 }, 00:10:02.211 { 00:10:02.211 "name": "BaseBdev3", 00:10:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.211 "is_configured": false, 00:10:02.211 "data_offset": 0, 00:10:02.211 "data_size": 0 00:10:02.211 }, 00:10:02.211 { 00:10:02.211 "name": "BaseBdev4", 00:10:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.211 "is_configured": false, 00:10:02.211 "data_offset": 0, 00:10:02.211 "data_size": 0 00:10:02.211 } 00:10:02.211 ] 00:10:02.211 }' 00:10:02.211 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.211 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.470 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.470 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.470 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.470 [2024-11-18 23:05:21.753423] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.470 [2024-11-18 23:05:21.753466] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:02.470 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.471 [2024-11-18 23:05:21.765447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.471 [2024-11-18 23:05:21.767249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.471 [2024-11-18 23:05:21.767302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.471 [2024-11-18 23:05:21.767312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.471 [2024-11-18 23:05:21.767320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.471 [2024-11-18 23:05:21.767326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.471 [2024-11-18 23:05:21.767334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.471 "name": "Existed_Raid", 00:10:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.471 "strip_size_kb": 64, 00:10:02.471 "state": "configuring", 00:10:02.471 "raid_level": "concat", 00:10:02.471 "superblock": false, 00:10:02.471 "num_base_bdevs": 4, 00:10:02.471 
"num_base_bdevs_discovered": 1, 00:10:02.471 "num_base_bdevs_operational": 4, 00:10:02.471 "base_bdevs_list": [ 00:10:02.471 { 00:10:02.471 "name": "BaseBdev1", 00:10:02.471 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:02.471 "is_configured": true, 00:10:02.471 "data_offset": 0, 00:10:02.471 "data_size": 65536 00:10:02.471 }, 00:10:02.471 { 00:10:02.471 "name": "BaseBdev2", 00:10:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.471 "is_configured": false, 00:10:02.471 "data_offset": 0, 00:10:02.471 "data_size": 0 00:10:02.471 }, 00:10:02.471 { 00:10:02.471 "name": "BaseBdev3", 00:10:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.471 "is_configured": false, 00:10:02.471 "data_offset": 0, 00:10:02.471 "data_size": 0 00:10:02.471 }, 00:10:02.471 { 00:10:02.471 "name": "BaseBdev4", 00:10:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.471 "is_configured": false, 00:10:02.471 "data_offset": 0, 00:10:02.471 "data_size": 0 00:10:02.471 } 00:10:02.471 ] 00:10:02.471 }' 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.471 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.040 [2024-11-18 23:05:22.222854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.040 BaseBdev2 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.040 23:05:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.040 [ 00:10:03.040 { 00:10:03.040 "name": "BaseBdev2", 00:10:03.040 "aliases": [ 00:10:03.040 "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb" 00:10:03.040 ], 00:10:03.040 "product_name": "Malloc disk", 00:10:03.040 "block_size": 512, 00:10:03.040 "num_blocks": 65536, 00:10:03.040 "uuid": "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb", 00:10:03.040 "assigned_rate_limits": { 00:10:03.040 "rw_ios_per_sec": 0, 00:10:03.040 "rw_mbytes_per_sec": 0, 00:10:03.040 "r_mbytes_per_sec": 0, 00:10:03.040 "w_mbytes_per_sec": 0 00:10:03.040 }, 00:10:03.040 "claimed": true, 00:10:03.040 "claim_type": "exclusive_write", 00:10:03.040 "zoned": false, 00:10:03.040 "supported_io_types": { 
00:10:03.040 "read": true, 00:10:03.040 "write": true, 00:10:03.040 "unmap": true, 00:10:03.040 "flush": true, 00:10:03.040 "reset": true, 00:10:03.040 "nvme_admin": false, 00:10:03.040 "nvme_io": false, 00:10:03.040 "nvme_io_md": false, 00:10:03.040 "write_zeroes": true, 00:10:03.040 "zcopy": true, 00:10:03.040 "get_zone_info": false, 00:10:03.040 "zone_management": false, 00:10:03.040 "zone_append": false, 00:10:03.040 "compare": false, 00:10:03.040 "compare_and_write": false, 00:10:03.040 "abort": true, 00:10:03.040 "seek_hole": false, 00:10:03.040 "seek_data": false, 00:10:03.040 "copy": true, 00:10:03.040 "nvme_iov_md": false 00:10:03.040 }, 00:10:03.040 "memory_domains": [ 00:10:03.040 { 00:10:03.040 "dma_device_id": "system", 00:10:03.040 "dma_device_type": 1 00:10:03.040 }, 00:10:03.040 { 00:10:03.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.040 "dma_device_type": 2 00:10:03.040 } 00:10:03.040 ], 00:10:03.040 "driver_specific": {} 00:10:03.040 } 00:10:03.040 ] 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.040 "name": "Existed_Raid", 00:10:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.040 "strip_size_kb": 64, 00:10:03.040 "state": "configuring", 00:10:03.040 "raid_level": "concat", 00:10:03.040 "superblock": false, 00:10:03.040 "num_base_bdevs": 4, 00:10:03.040 "num_base_bdevs_discovered": 2, 00:10:03.040 "num_base_bdevs_operational": 4, 00:10:03.040 "base_bdevs_list": [ 00:10:03.040 { 00:10:03.040 "name": "BaseBdev1", 00:10:03.040 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:03.040 "is_configured": true, 00:10:03.040 "data_offset": 0, 00:10:03.040 "data_size": 65536 00:10:03.040 }, 00:10:03.040 { 00:10:03.040 "name": "BaseBdev2", 00:10:03.040 "uuid": "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb", 00:10:03.040 
"is_configured": true, 00:10:03.040 "data_offset": 0, 00:10:03.040 "data_size": 65536 00:10:03.040 }, 00:10:03.040 { 00:10:03.040 "name": "BaseBdev3", 00:10:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.040 "is_configured": false, 00:10:03.040 "data_offset": 0, 00:10:03.040 "data_size": 0 00:10:03.040 }, 00:10:03.040 { 00:10:03.040 "name": "BaseBdev4", 00:10:03.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.040 "is_configured": false, 00:10:03.040 "data_offset": 0, 00:10:03.040 "data_size": 0 00:10:03.040 } 00:10:03.040 ] 00:10:03.040 }' 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.040 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.608 [2024-11-18 23:05:22.712919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.608 BaseBdev3 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.608 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.608 [ 00:10:03.608 { 00:10:03.608 "name": "BaseBdev3", 00:10:03.608 "aliases": [ 00:10:03.608 "3159b68a-b148-4df1-b45b-61b4c3fde0b9" 00:10:03.608 ], 00:10:03.608 "product_name": "Malloc disk", 00:10:03.608 "block_size": 512, 00:10:03.608 "num_blocks": 65536, 00:10:03.608 "uuid": "3159b68a-b148-4df1-b45b-61b4c3fde0b9", 00:10:03.608 "assigned_rate_limits": { 00:10:03.608 "rw_ios_per_sec": 0, 00:10:03.608 "rw_mbytes_per_sec": 0, 00:10:03.608 "r_mbytes_per_sec": 0, 00:10:03.608 "w_mbytes_per_sec": 0 00:10:03.608 }, 00:10:03.608 "claimed": true, 00:10:03.608 "claim_type": "exclusive_write", 00:10:03.608 "zoned": false, 00:10:03.608 "supported_io_types": { 00:10:03.608 "read": true, 00:10:03.608 "write": true, 00:10:03.608 "unmap": true, 00:10:03.608 "flush": true, 00:10:03.608 "reset": true, 00:10:03.608 "nvme_admin": false, 00:10:03.608 "nvme_io": false, 00:10:03.608 "nvme_io_md": false, 00:10:03.608 "write_zeroes": true, 00:10:03.608 "zcopy": true, 00:10:03.608 "get_zone_info": false, 00:10:03.608 "zone_management": false, 00:10:03.608 "zone_append": false, 00:10:03.608 "compare": false, 00:10:03.608 "compare_and_write": false, 
00:10:03.609 "abort": true, 00:10:03.609 "seek_hole": false, 00:10:03.609 "seek_data": false, 00:10:03.609 "copy": true, 00:10:03.609 "nvme_iov_md": false 00:10:03.609 }, 00:10:03.609 "memory_domains": [ 00:10:03.609 { 00:10:03.609 "dma_device_id": "system", 00:10:03.609 "dma_device_type": 1 00:10:03.609 }, 00:10:03.609 { 00:10:03.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.609 "dma_device_type": 2 00:10:03.609 } 00:10:03.609 ], 00:10:03.609 "driver_specific": {} 00:10:03.609 } 00:10:03.609 ] 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.609 "name": "Existed_Raid", 00:10:03.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.609 "strip_size_kb": 64, 00:10:03.609 "state": "configuring", 00:10:03.609 "raid_level": "concat", 00:10:03.609 "superblock": false, 00:10:03.609 "num_base_bdevs": 4, 00:10:03.609 "num_base_bdevs_discovered": 3, 00:10:03.609 "num_base_bdevs_operational": 4, 00:10:03.609 "base_bdevs_list": [ 00:10:03.609 { 00:10:03.609 "name": "BaseBdev1", 00:10:03.609 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:03.609 "is_configured": true, 00:10:03.609 "data_offset": 0, 00:10:03.609 "data_size": 65536 00:10:03.609 }, 00:10:03.609 { 00:10:03.609 "name": "BaseBdev2", 00:10:03.609 "uuid": "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb", 00:10:03.609 "is_configured": true, 00:10:03.609 "data_offset": 0, 00:10:03.609 "data_size": 65536 00:10:03.609 }, 00:10:03.609 { 00:10:03.609 "name": "BaseBdev3", 00:10:03.609 "uuid": "3159b68a-b148-4df1-b45b-61b4c3fde0b9", 00:10:03.609 "is_configured": true, 00:10:03.609 "data_offset": 0, 00:10:03.609 "data_size": 65536 00:10:03.609 }, 00:10:03.609 { 00:10:03.609 "name": "BaseBdev4", 00:10:03.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.609 "is_configured": false, 
00:10:03.609 "data_offset": 0, 00:10:03.609 "data_size": 0 00:10:03.609 } 00:10:03.609 ] 00:10:03.609 }' 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.609 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.869 [2024-11-18 23:05:23.199031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.869 [2024-11-18 23:05:23.199083] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:03.869 [2024-11-18 23:05:23.199093] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.869 [2024-11-18 23:05:23.199387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:03.869 [2024-11-18 23:05:23.199544] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:03.869 [2024-11-18 23:05:23.199561] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:03.869 [2024-11-18 23:05:23.199768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.869 BaseBdev4 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.869 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.869 [ 00:10:03.869 { 00:10:03.869 "name": "BaseBdev4", 00:10:03.869 "aliases": [ 00:10:03.869 "699c96b6-c832-4d36-b9e0-7c1072777b19" 00:10:03.869 ], 00:10:03.869 "product_name": "Malloc disk", 00:10:03.869 "block_size": 512, 00:10:03.869 "num_blocks": 65536, 00:10:03.869 "uuid": "699c96b6-c832-4d36-b9e0-7c1072777b19", 00:10:03.869 "assigned_rate_limits": { 00:10:03.869 "rw_ios_per_sec": 0, 00:10:03.869 "rw_mbytes_per_sec": 0, 00:10:03.869 "r_mbytes_per_sec": 0, 00:10:03.869 "w_mbytes_per_sec": 0 00:10:03.869 }, 00:10:03.869 "claimed": true, 00:10:03.869 "claim_type": "exclusive_write", 00:10:03.869 "zoned": false, 00:10:03.869 "supported_io_types": { 00:10:03.869 "read": true, 00:10:03.869 "write": true, 00:10:03.869 "unmap": true, 00:10:03.870 "flush": true, 00:10:03.870 "reset": true, 00:10:03.870 
"nvme_admin": false, 00:10:03.870 "nvme_io": false, 00:10:03.870 "nvme_io_md": false, 00:10:03.870 "write_zeroes": true, 00:10:03.870 "zcopy": true, 00:10:03.870 "get_zone_info": false, 00:10:03.870 "zone_management": false, 00:10:03.870 "zone_append": false, 00:10:03.870 "compare": false, 00:10:03.870 "compare_and_write": false, 00:10:03.870 "abort": true, 00:10:03.870 "seek_hole": false, 00:10:03.870 "seek_data": false, 00:10:03.870 "copy": true, 00:10:03.870 "nvme_iov_md": false 00:10:03.870 }, 00:10:03.870 "memory_domains": [ 00:10:03.870 { 00:10:03.870 "dma_device_id": "system", 00:10:03.870 "dma_device_type": 1 00:10:03.870 }, 00:10:03.870 { 00:10:03.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.870 "dma_device_type": 2 00:10:03.870 } 00:10:03.870 ], 00:10:03.870 "driver_specific": {} 00:10:03.870 } 00:10:03.870 ] 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.870 
23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.870 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.201 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.201 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.201 "name": "Existed_Raid", 00:10:04.201 "uuid": "03370e14-ec93-40bf-8919-5e54cf5d664b", 00:10:04.201 "strip_size_kb": 64, 00:10:04.201 "state": "online", 00:10:04.201 "raid_level": "concat", 00:10:04.201 "superblock": false, 00:10:04.201 "num_base_bdevs": 4, 00:10:04.201 "num_base_bdevs_discovered": 4, 00:10:04.201 "num_base_bdevs_operational": 4, 00:10:04.201 "base_bdevs_list": [ 00:10:04.201 { 00:10:04.201 "name": "BaseBdev1", 00:10:04.201 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:04.201 "is_configured": true, 00:10:04.201 "data_offset": 0, 00:10:04.201 "data_size": 65536 00:10:04.201 }, 00:10:04.201 { 00:10:04.201 "name": "BaseBdev2", 00:10:04.201 "uuid": "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb", 00:10:04.201 "is_configured": true, 00:10:04.201 "data_offset": 0, 00:10:04.201 "data_size": 65536 00:10:04.201 }, 00:10:04.201 { 00:10:04.201 "name": "BaseBdev3", 
00:10:04.201 "uuid": "3159b68a-b148-4df1-b45b-61b4c3fde0b9", 00:10:04.201 "is_configured": true, 00:10:04.201 "data_offset": 0, 00:10:04.201 "data_size": 65536 00:10:04.201 }, 00:10:04.201 { 00:10:04.201 "name": "BaseBdev4", 00:10:04.201 "uuid": "699c96b6-c832-4d36-b9e0-7c1072777b19", 00:10:04.201 "is_configured": true, 00:10:04.201 "data_offset": 0, 00:10:04.201 "data_size": 65536 00:10:04.201 } 00:10:04.201 ] 00:10:04.201 }' 00:10:04.201 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.201 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 [2024-11-18 23:05:23.658624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.464 
23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.464 "name": "Existed_Raid", 00:10:04.464 "aliases": [ 00:10:04.464 "03370e14-ec93-40bf-8919-5e54cf5d664b" 00:10:04.464 ], 00:10:04.464 "product_name": "Raid Volume", 00:10:04.464 "block_size": 512, 00:10:04.464 "num_blocks": 262144, 00:10:04.464 "uuid": "03370e14-ec93-40bf-8919-5e54cf5d664b", 00:10:04.464 "assigned_rate_limits": { 00:10:04.464 "rw_ios_per_sec": 0, 00:10:04.464 "rw_mbytes_per_sec": 0, 00:10:04.464 "r_mbytes_per_sec": 0, 00:10:04.464 "w_mbytes_per_sec": 0 00:10:04.464 }, 00:10:04.464 "claimed": false, 00:10:04.464 "zoned": false, 00:10:04.464 "supported_io_types": { 00:10:04.464 "read": true, 00:10:04.464 "write": true, 00:10:04.464 "unmap": true, 00:10:04.464 "flush": true, 00:10:04.464 "reset": true, 00:10:04.464 "nvme_admin": false, 00:10:04.464 "nvme_io": false, 00:10:04.464 "nvme_io_md": false, 00:10:04.464 "write_zeroes": true, 00:10:04.464 "zcopy": false, 00:10:04.464 "get_zone_info": false, 00:10:04.464 "zone_management": false, 00:10:04.464 "zone_append": false, 00:10:04.464 "compare": false, 00:10:04.464 "compare_and_write": false, 00:10:04.464 "abort": false, 00:10:04.464 "seek_hole": false, 00:10:04.464 "seek_data": false, 00:10:04.464 "copy": false, 00:10:04.464 "nvme_iov_md": false 00:10:04.464 }, 00:10:04.464 "memory_domains": [ 00:10:04.464 { 00:10:04.464 "dma_device_id": "system", 00:10:04.464 "dma_device_type": 1 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.464 "dma_device_type": 2 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "system", 00:10:04.464 "dma_device_type": 1 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.464 "dma_device_type": 2 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "system", 00:10:04.464 "dma_device_type": 1 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:04.464 "dma_device_type": 2 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "system", 00:10:04.464 "dma_device_type": 1 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.464 "dma_device_type": 2 00:10:04.464 } 00:10:04.464 ], 00:10:04.464 "driver_specific": { 00:10:04.464 "raid": { 00:10:04.464 "uuid": "03370e14-ec93-40bf-8919-5e54cf5d664b", 00:10:04.464 "strip_size_kb": 64, 00:10:04.464 "state": "online", 00:10:04.464 "raid_level": "concat", 00:10:04.464 "superblock": false, 00:10:04.464 "num_base_bdevs": 4, 00:10:04.464 "num_base_bdevs_discovered": 4, 00:10:04.464 "num_base_bdevs_operational": 4, 00:10:04.464 "base_bdevs_list": [ 00:10:04.464 { 00:10:04.464 "name": "BaseBdev1", 00:10:04.464 "uuid": "19fb2568-b6ce-42c6-90de-ffa6b4a17328", 00:10:04.464 "is_configured": true, 00:10:04.464 "data_offset": 0, 00:10:04.464 "data_size": 65536 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "name": "BaseBdev2", 00:10:04.464 "uuid": "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb", 00:10:04.464 "is_configured": true, 00:10:04.464 "data_offset": 0, 00:10:04.464 "data_size": 65536 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "name": "BaseBdev3", 00:10:04.464 "uuid": "3159b68a-b148-4df1-b45b-61b4c3fde0b9", 00:10:04.464 "is_configured": true, 00:10:04.464 "data_offset": 0, 00:10:04.464 "data_size": 65536 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "name": "BaseBdev4", 00:10:04.464 "uuid": "699c96b6-c832-4d36-b9e0-7c1072777b19", 00:10:04.464 "is_configured": true, 00:10:04.464 "data_offset": 0, 00:10:04.464 "data_size": 65536 00:10:04.464 } 00:10:04.464 ] 00:10:04.464 } 00:10:04.464 } 00:10:04.464 }' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:04.464 BaseBdev2 
00:10:04.464 BaseBdev3 00:10:04.464 BaseBdev4' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.724 23:05:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.724 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.725 23:05:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.725 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.725 [2024-11-18 23:05:23.989729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.725 [2024-11-18 23:05:23.989800] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.725 [2024-11-18 23:05:23.989869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.725 "name": "Existed_Raid", 00:10:04.725 "uuid": "03370e14-ec93-40bf-8919-5e54cf5d664b", 00:10:04.725 "strip_size_kb": 64, 00:10:04.725 "state": "offline", 00:10:04.725 "raid_level": "concat", 00:10:04.725 "superblock": false, 00:10:04.725 "num_base_bdevs": 4, 00:10:04.725 "num_base_bdevs_discovered": 3, 00:10:04.725 "num_base_bdevs_operational": 3, 00:10:04.725 "base_bdevs_list": [ 00:10:04.725 { 00:10:04.725 "name": null, 00:10:04.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.725 "is_configured": false, 00:10:04.725 "data_offset": 0, 00:10:04.725 "data_size": 65536 00:10:04.725 }, 00:10:04.725 { 00:10:04.725 "name": "BaseBdev2", 00:10:04.725 "uuid": "a28dd13d-91e7-4c91-b1a8-dbee016a1fdb", 00:10:04.725 "is_configured": 
true, 00:10:04.725 "data_offset": 0, 00:10:04.725 "data_size": 65536 00:10:04.725 }, 00:10:04.725 { 00:10:04.725 "name": "BaseBdev3", 00:10:04.725 "uuid": "3159b68a-b148-4df1-b45b-61b4c3fde0b9", 00:10:04.725 "is_configured": true, 00:10:04.725 "data_offset": 0, 00:10:04.725 "data_size": 65536 00:10:04.725 }, 00:10:04.725 { 00:10:04.725 "name": "BaseBdev4", 00:10:04.725 "uuid": "699c96b6-c832-4d36-b9e0-7c1072777b19", 00:10:04.725 "is_configured": true, 00:10:04.725 "data_offset": 0, 00:10:04.725 "data_size": 65536 00:10:04.725 } 00:10:04.725 ] 00:10:04.725 }' 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.725 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.293 [2024-11-18 23:05:24.444259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.293 [2024-11-18 23:05:24.511342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.293 23:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.293 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 [2024-11-18 23:05:24.562441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:05.294 [2024-11-18 23:05:24.562525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 BaseBdev2 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.294 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.294 [ 00:10:05.294 { 00:10:05.294 "name": "BaseBdev2", 00:10:05.294 "aliases": [ 00:10:05.294 "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65" 00:10:05.294 ], 00:10:05.294 "product_name": "Malloc disk", 00:10:05.294 "block_size": 512, 00:10:05.294 "num_blocks": 65536, 00:10:05.294 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:05.294 "assigned_rate_limits": { 00:10:05.294 "rw_ios_per_sec": 0, 00:10:05.294 "rw_mbytes_per_sec": 0, 00:10:05.294 "r_mbytes_per_sec": 0, 00:10:05.294 "w_mbytes_per_sec": 0 00:10:05.294 }, 00:10:05.554 "claimed": false, 00:10:05.554 "zoned": false, 00:10:05.554 "supported_io_types": { 00:10:05.554 "read": true, 00:10:05.554 "write": true, 00:10:05.554 "unmap": true, 00:10:05.554 "flush": true, 00:10:05.554 "reset": true, 00:10:05.554 "nvme_admin": false, 00:10:05.554 "nvme_io": false, 00:10:05.554 "nvme_io_md": false, 00:10:05.554 "write_zeroes": true, 00:10:05.554 "zcopy": true, 00:10:05.554 "get_zone_info": false, 00:10:05.554 "zone_management": false, 00:10:05.554 "zone_append": false, 00:10:05.554 "compare": false, 00:10:05.554 "compare_and_write": false, 00:10:05.554 "abort": true, 00:10:05.554 "seek_hole": false, 00:10:05.554 
"seek_data": false, 00:10:05.554 "copy": true, 00:10:05.554 "nvme_iov_md": false 00:10:05.554 }, 00:10:05.554 "memory_domains": [ 00:10:05.554 { 00:10:05.554 "dma_device_id": "system", 00:10:05.554 "dma_device_type": 1 00:10:05.554 }, 00:10:05.554 { 00:10:05.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.554 "dma_device_type": 2 00:10:05.554 } 00:10:05.554 ], 00:10:05.554 "driver_specific": {} 00:10:05.554 } 00:10:05.554 ] 00:10:05.554 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.554 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.554 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 BaseBdev3 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 [ 00:10:05.555 { 00:10:05.555 "name": "BaseBdev3", 00:10:05.555 "aliases": [ 00:10:05.555 "82c3be17-6710-4304-9388-12d75dabf0a4" 00:10:05.555 ], 00:10:05.555 "product_name": "Malloc disk", 00:10:05.555 "block_size": 512, 00:10:05.555 "num_blocks": 65536, 00:10:05.555 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:05.555 "assigned_rate_limits": { 00:10:05.555 "rw_ios_per_sec": 0, 00:10:05.555 "rw_mbytes_per_sec": 0, 00:10:05.555 "r_mbytes_per_sec": 0, 00:10:05.555 "w_mbytes_per_sec": 0 00:10:05.555 }, 00:10:05.555 "claimed": false, 00:10:05.555 "zoned": false, 00:10:05.555 "supported_io_types": { 00:10:05.555 "read": true, 00:10:05.555 "write": true, 00:10:05.555 "unmap": true, 00:10:05.555 "flush": true, 00:10:05.555 "reset": true, 00:10:05.555 "nvme_admin": false, 00:10:05.555 "nvme_io": false, 00:10:05.555 "nvme_io_md": false, 00:10:05.555 "write_zeroes": true, 00:10:05.555 "zcopy": true, 00:10:05.555 "get_zone_info": false, 00:10:05.555 "zone_management": false, 00:10:05.555 "zone_append": false, 00:10:05.555 "compare": false, 00:10:05.555 "compare_and_write": false, 00:10:05.555 "abort": true, 00:10:05.555 "seek_hole": false, 00:10:05.555 "seek_data": false, 
00:10:05.555 "copy": true, 00:10:05.555 "nvme_iov_md": false 00:10:05.555 }, 00:10:05.555 "memory_domains": [ 00:10:05.555 { 00:10:05.555 "dma_device_id": "system", 00:10:05.555 "dma_device_type": 1 00:10:05.555 }, 00:10:05.555 { 00:10:05.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.555 "dma_device_type": 2 00:10:05.555 } 00:10:05.555 ], 00:10:05.555 "driver_specific": {} 00:10:05.555 } 00:10:05.555 ] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 BaseBdev4 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.555 
23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.555 [ 00:10:05.555 { 00:10:05.555 "name": "BaseBdev4", 00:10:05.555 "aliases": [ 00:10:05.555 "710cce4f-4d06-4178-8701-c5c2b2064a84" 00:10:05.555 ], 00:10:05.555 "product_name": "Malloc disk", 00:10:05.555 "block_size": 512, 00:10:05.555 "num_blocks": 65536, 00:10:05.555 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:05.555 "assigned_rate_limits": { 00:10:05.555 "rw_ios_per_sec": 0, 00:10:05.555 "rw_mbytes_per_sec": 0, 00:10:05.555 "r_mbytes_per_sec": 0, 00:10:05.555 "w_mbytes_per_sec": 0 00:10:05.555 }, 00:10:05.555 "claimed": false, 00:10:05.555 "zoned": false, 00:10:05.555 "supported_io_types": { 00:10:05.555 "read": true, 00:10:05.555 "write": true, 00:10:05.555 "unmap": true, 00:10:05.555 "flush": true, 00:10:05.555 "reset": true, 00:10:05.555 "nvme_admin": false, 00:10:05.555 "nvme_io": false, 00:10:05.555 "nvme_io_md": false, 00:10:05.555 "write_zeroes": true, 00:10:05.555 "zcopy": true, 00:10:05.555 "get_zone_info": false, 00:10:05.555 "zone_management": false, 00:10:05.555 "zone_append": false, 00:10:05.555 "compare": false, 00:10:05.555 "compare_and_write": false, 00:10:05.555 "abort": true, 00:10:05.555 "seek_hole": false, 00:10:05.555 "seek_data": false, 00:10:05.555 
"copy": true, 00:10:05.555 "nvme_iov_md": false 00:10:05.555 }, 00:10:05.555 "memory_domains": [ 00:10:05.555 { 00:10:05.555 "dma_device_id": "system", 00:10:05.555 "dma_device_type": 1 00:10:05.555 }, 00:10:05.555 { 00:10:05.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.555 "dma_device_type": 2 00:10:05.555 } 00:10:05.555 ], 00:10:05.555 "driver_specific": {} 00:10:05.555 } 00:10:05.555 ] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.555 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.556 [2024-11-18 23:05:24.789712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.556 [2024-11-18 23:05:24.789808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.556 [2024-11-18 23:05:24.789847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.556 [2024-11-18 23:05:24.791639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.556 [2024-11-18 23:05:24.791727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.556 23:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.556 "name": "Existed_Raid", 00:10:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.556 "strip_size_kb": 64, 00:10:05.556 "state": "configuring", 00:10:05.556 
"raid_level": "concat", 00:10:05.556 "superblock": false, 00:10:05.556 "num_base_bdevs": 4, 00:10:05.556 "num_base_bdevs_discovered": 3, 00:10:05.556 "num_base_bdevs_operational": 4, 00:10:05.556 "base_bdevs_list": [ 00:10:05.556 { 00:10:05.556 "name": "BaseBdev1", 00:10:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.556 "is_configured": false, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 0 00:10:05.556 }, 00:10:05.556 { 00:10:05.556 "name": "BaseBdev2", 00:10:05.556 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:05.556 "is_configured": true, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 65536 00:10:05.556 }, 00:10:05.556 { 00:10:05.556 "name": "BaseBdev3", 00:10:05.556 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:05.556 "is_configured": true, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 65536 00:10:05.556 }, 00:10:05.556 { 00:10:05.556 "name": "BaseBdev4", 00:10:05.556 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:05.556 "is_configured": true, 00:10:05.556 "data_offset": 0, 00:10:05.556 "data_size": 65536 00:10:05.556 } 00:10:05.556 ] 00:10:05.556 }' 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.556 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.124 [2024-11-18 23:05:25.216970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.124 "name": "Existed_Raid", 00:10:06.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.124 "strip_size_kb": 64, 00:10:06.124 "state": "configuring", 00:10:06.124 "raid_level": "concat", 00:10:06.124 "superblock": false, 
00:10:06.124 "num_base_bdevs": 4, 00:10:06.124 "num_base_bdevs_discovered": 2, 00:10:06.124 "num_base_bdevs_operational": 4, 00:10:06.124 "base_bdevs_list": [ 00:10:06.124 { 00:10:06.124 "name": "BaseBdev1", 00:10:06.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.124 "is_configured": false, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 0 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "name": null, 00:10:06.124 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:06.124 "is_configured": false, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "name": "BaseBdev3", 00:10:06.124 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:06.124 "is_configured": true, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "name": "BaseBdev4", 00:10:06.124 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:06.124 "is_configured": true, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 } 00:10:06.124 ] 00:10:06.124 }' 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.124 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:06.383 23:05:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.383 [2024-11-18 23:05:25.647092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.383 BaseBdev1 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.383 [ 00:10:06.383 { 00:10:06.383 "name": "BaseBdev1", 00:10:06.383 "aliases": [ 00:10:06.383 "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb" 00:10:06.383 ], 00:10:06.383 "product_name": "Malloc disk", 00:10:06.383 "block_size": 512, 00:10:06.383 "num_blocks": 65536, 00:10:06.383 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:06.383 "assigned_rate_limits": { 00:10:06.383 "rw_ios_per_sec": 0, 00:10:06.383 "rw_mbytes_per_sec": 0, 00:10:06.383 "r_mbytes_per_sec": 0, 00:10:06.383 "w_mbytes_per_sec": 0 00:10:06.383 }, 00:10:06.383 "claimed": true, 00:10:06.383 "claim_type": "exclusive_write", 00:10:06.383 "zoned": false, 00:10:06.383 "supported_io_types": { 00:10:06.383 "read": true, 00:10:06.383 "write": true, 00:10:06.383 "unmap": true, 00:10:06.383 "flush": true, 00:10:06.383 "reset": true, 00:10:06.383 "nvme_admin": false, 00:10:06.383 "nvme_io": false, 00:10:06.383 "nvme_io_md": false, 00:10:06.383 "write_zeroes": true, 00:10:06.383 "zcopy": true, 00:10:06.383 "get_zone_info": false, 00:10:06.383 "zone_management": false, 00:10:06.383 "zone_append": false, 00:10:06.383 "compare": false, 00:10:06.383 "compare_and_write": false, 00:10:06.383 "abort": true, 00:10:06.383 "seek_hole": false, 00:10:06.383 "seek_data": false, 00:10:06.383 "copy": true, 00:10:06.383 "nvme_iov_md": false 00:10:06.383 }, 00:10:06.383 "memory_domains": [ 00:10:06.383 { 00:10:06.383 "dma_device_id": "system", 00:10:06.383 "dma_device_type": 1 00:10:06.383 }, 00:10:06.383 { 00:10:06.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.383 "dma_device_type": 2 00:10:06.383 } 00:10:06.383 ], 00:10:06.383 "driver_specific": {} 00:10:06.383 } 00:10:06.383 ] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.383 "name": "Existed_Raid", 00:10:06.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.383 "strip_size_kb": 64, 00:10:06.383 "state": "configuring", 00:10:06.383 "raid_level": "concat", 00:10:06.383 "superblock": false, 
00:10:06.383 "num_base_bdevs": 4, 00:10:06.383 "num_base_bdevs_discovered": 3, 00:10:06.383 "num_base_bdevs_operational": 4, 00:10:06.383 "base_bdevs_list": [ 00:10:06.383 { 00:10:06.383 "name": "BaseBdev1", 00:10:06.383 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:06.383 "is_configured": true, 00:10:06.383 "data_offset": 0, 00:10:06.383 "data_size": 65536 00:10:06.383 }, 00:10:06.383 { 00:10:06.383 "name": null, 00:10:06.383 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:06.383 "is_configured": false, 00:10:06.383 "data_offset": 0, 00:10:06.383 "data_size": 65536 00:10:06.383 }, 00:10:06.383 { 00:10:06.383 "name": "BaseBdev3", 00:10:06.383 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:06.383 "is_configured": true, 00:10:06.383 "data_offset": 0, 00:10:06.383 "data_size": 65536 00:10:06.383 }, 00:10:06.383 { 00:10:06.383 "name": "BaseBdev4", 00:10:06.383 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:06.383 "is_configured": true, 00:10:06.383 "data_offset": 0, 00:10:06.383 "data_size": 65536 00:10:06.383 } 00:10:06.383 ] 00:10:06.383 }' 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.383 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:06.953 23:05:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.953 [2024-11-18 23:05:26.150241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.953 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.954 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.954 23:05:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.954 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.954 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.954 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.954 "name": "Existed_Raid", 00:10:06.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.954 "strip_size_kb": 64, 00:10:06.954 "state": "configuring", 00:10:06.954 "raid_level": "concat", 00:10:06.954 "superblock": false, 00:10:06.954 "num_base_bdevs": 4, 00:10:06.954 "num_base_bdevs_discovered": 2, 00:10:06.954 "num_base_bdevs_operational": 4, 00:10:06.954 "base_bdevs_list": [ 00:10:06.954 { 00:10:06.954 "name": "BaseBdev1", 00:10:06.954 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:06.954 "is_configured": true, 00:10:06.954 "data_offset": 0, 00:10:06.954 "data_size": 65536 00:10:06.954 }, 00:10:06.954 { 00:10:06.954 "name": null, 00:10:06.954 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:06.954 "is_configured": false, 00:10:06.954 "data_offset": 0, 00:10:06.954 "data_size": 65536 00:10:06.954 }, 00:10:06.954 { 00:10:06.954 "name": null, 00:10:06.954 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:06.954 "is_configured": false, 00:10:06.954 "data_offset": 0, 00:10:06.954 "data_size": 65536 00:10:06.954 }, 00:10:06.954 { 00:10:06.954 "name": "BaseBdev4", 00:10:06.954 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:06.954 "is_configured": true, 00:10:06.954 "data_offset": 0, 00:10:06.954 "data_size": 65536 00:10:06.954 } 00:10:06.954 ] 00:10:06.954 }' 00:10:06.954 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.954 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.213 23:05:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.213 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.213 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.213 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.473 [2024-11-18 23:05:26.633461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.473 "name": "Existed_Raid", 00:10:07.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.473 "strip_size_kb": 64, 00:10:07.473 "state": "configuring", 00:10:07.473 "raid_level": "concat", 00:10:07.473 "superblock": false, 00:10:07.473 "num_base_bdevs": 4, 00:10:07.473 "num_base_bdevs_discovered": 3, 00:10:07.473 "num_base_bdevs_operational": 4, 00:10:07.473 "base_bdevs_list": [ 00:10:07.473 { 00:10:07.473 "name": "BaseBdev1", 00:10:07.473 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:07.473 "is_configured": true, 00:10:07.473 "data_offset": 0, 00:10:07.473 "data_size": 65536 00:10:07.473 }, 00:10:07.473 { 00:10:07.473 "name": null, 00:10:07.473 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:07.473 "is_configured": false, 00:10:07.473 "data_offset": 0, 00:10:07.473 "data_size": 65536 00:10:07.473 }, 00:10:07.473 { 00:10:07.473 "name": "BaseBdev3", 00:10:07.473 "uuid": 
"82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:07.473 "is_configured": true, 00:10:07.473 "data_offset": 0, 00:10:07.473 "data_size": 65536 00:10:07.473 }, 00:10:07.473 { 00:10:07.473 "name": "BaseBdev4", 00:10:07.473 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:07.473 "is_configured": true, 00:10:07.473 "data_offset": 0, 00:10:07.473 "data_size": 65536 00:10:07.473 } 00:10:07.473 ] 00:10:07.473 }' 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.473 23:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.733 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.993 [2024-11-18 23:05:27.112693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.993 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.994 "name": "Existed_Raid", 00:10:07.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.994 "strip_size_kb": 64, 00:10:07.994 "state": "configuring", 00:10:07.994 "raid_level": "concat", 00:10:07.994 "superblock": false, 00:10:07.994 "num_base_bdevs": 4, 00:10:07.994 
"num_base_bdevs_discovered": 2, 00:10:07.994 "num_base_bdevs_operational": 4, 00:10:07.994 "base_bdevs_list": [ 00:10:07.994 { 00:10:07.994 "name": null, 00:10:07.994 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:07.994 "is_configured": false, 00:10:07.994 "data_offset": 0, 00:10:07.994 "data_size": 65536 00:10:07.994 }, 00:10:07.994 { 00:10:07.994 "name": null, 00:10:07.994 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:07.994 "is_configured": false, 00:10:07.994 "data_offset": 0, 00:10:07.994 "data_size": 65536 00:10:07.994 }, 00:10:07.994 { 00:10:07.994 "name": "BaseBdev3", 00:10:07.994 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:07.994 "is_configured": true, 00:10:07.994 "data_offset": 0, 00:10:07.994 "data_size": 65536 00:10:07.994 }, 00:10:07.994 { 00:10:07.994 "name": "BaseBdev4", 00:10:07.994 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:07.994 "is_configured": true, 00:10:07.994 "data_offset": 0, 00:10:07.994 "data_size": 65536 00:10:07.994 } 00:10:07.994 ] 00:10:07.994 }' 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.994 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 [2024-11-18 23:05:27.598459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.514 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.514 "name": "Existed_Raid", 00:10:08.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.514 "strip_size_kb": 64, 00:10:08.514 "state": "configuring", 00:10:08.514 "raid_level": "concat", 00:10:08.514 "superblock": false, 00:10:08.514 "num_base_bdevs": 4, 00:10:08.514 "num_base_bdevs_discovered": 3, 00:10:08.514 "num_base_bdevs_operational": 4, 00:10:08.514 "base_bdevs_list": [ 00:10:08.514 { 00:10:08.514 "name": null, 00:10:08.514 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:08.514 "is_configured": false, 00:10:08.514 "data_offset": 0, 00:10:08.514 "data_size": 65536 00:10:08.514 }, 00:10:08.514 { 00:10:08.514 "name": "BaseBdev2", 00:10:08.514 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:08.514 "is_configured": true, 00:10:08.514 "data_offset": 0, 00:10:08.514 "data_size": 65536 00:10:08.514 }, 00:10:08.514 { 00:10:08.514 "name": "BaseBdev3", 00:10:08.514 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:08.514 "is_configured": true, 00:10:08.514 "data_offset": 0, 00:10:08.514 "data_size": 65536 00:10:08.514 }, 00:10:08.514 { 00:10:08.514 "name": "BaseBdev4", 00:10:08.514 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:08.514 "is_configured": true, 00:10:08.514 "data_offset": 0, 00:10:08.514 "data_size": 65536 00:10:08.514 } 00:10:08.514 ] 00:10:08.514 }' 00:10:08.514 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.514 23:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.775 [2024-11-18 23:05:28.144372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:08.775 [2024-11-18 23:05:28.144416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:08.775 [2024-11-18 23:05:28.144424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:08.775 [2024-11-18 23:05:28.144678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:08.775 [2024-11-18 23:05:28.144786] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:08.775 [2024-11-18 23:05:28.144798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:08.775 [2024-11-18 23:05:28.144959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.775 NewBaseBdev 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:08.775 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.776 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.776 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.776 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.776 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.776 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.776 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.035 [ 00:10:09.035 { 00:10:09.035 "name": "NewBaseBdev", 00:10:09.035 "aliases": [ 00:10:09.035 "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb" 00:10:09.035 ], 00:10:09.035 "product_name": "Malloc disk", 00:10:09.035 "block_size": 512, 00:10:09.035 "num_blocks": 65536, 00:10:09.035 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:09.035 "assigned_rate_limits": { 00:10:09.035 "rw_ios_per_sec": 0, 00:10:09.035 "rw_mbytes_per_sec": 0, 00:10:09.035 "r_mbytes_per_sec": 0, 00:10:09.035 "w_mbytes_per_sec": 0 00:10:09.035 }, 00:10:09.035 "claimed": true, 00:10:09.035 "claim_type": "exclusive_write", 00:10:09.035 "zoned": false, 00:10:09.035 "supported_io_types": { 00:10:09.035 "read": true, 00:10:09.035 "write": true, 00:10:09.035 "unmap": true, 00:10:09.035 "flush": true, 00:10:09.035 "reset": true, 00:10:09.035 "nvme_admin": false, 00:10:09.035 "nvme_io": false, 00:10:09.035 "nvme_io_md": false, 00:10:09.035 "write_zeroes": true, 00:10:09.035 "zcopy": true, 00:10:09.035 "get_zone_info": false, 00:10:09.035 "zone_management": false, 00:10:09.035 "zone_append": false, 00:10:09.035 "compare": false, 00:10:09.035 "compare_and_write": false, 00:10:09.035 "abort": true, 00:10:09.035 "seek_hole": false, 00:10:09.035 "seek_data": false, 00:10:09.035 "copy": true, 00:10:09.035 "nvme_iov_md": false 00:10:09.035 }, 00:10:09.035 "memory_domains": [ 00:10:09.035 { 00:10:09.035 "dma_device_id": "system", 00:10:09.035 "dma_device_type": 1 00:10:09.035 }, 00:10:09.035 { 00:10:09.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.035 "dma_device_type": 2 00:10:09.035 } 00:10:09.035 ], 00:10:09.035 "driver_specific": {} 00:10:09.035 } 00:10:09.035 ] 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.035 "name": "Existed_Raid", 00:10:09.035 "uuid": "5abed6e8-2aad-48a0-bf28-24e7b4cff59b", 00:10:09.035 "strip_size_kb": 64, 00:10:09.035 "state": "online", 00:10:09.035 "raid_level": "concat", 00:10:09.035 "superblock": false, 00:10:09.035 
"num_base_bdevs": 4, 00:10:09.035 "num_base_bdevs_discovered": 4, 00:10:09.035 "num_base_bdevs_operational": 4, 00:10:09.035 "base_bdevs_list": [ 00:10:09.035 { 00:10:09.035 "name": "NewBaseBdev", 00:10:09.035 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:09.035 "is_configured": true, 00:10:09.035 "data_offset": 0, 00:10:09.035 "data_size": 65536 00:10:09.035 }, 00:10:09.035 { 00:10:09.035 "name": "BaseBdev2", 00:10:09.035 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:09.035 "is_configured": true, 00:10:09.035 "data_offset": 0, 00:10:09.035 "data_size": 65536 00:10:09.035 }, 00:10:09.035 { 00:10:09.035 "name": "BaseBdev3", 00:10:09.035 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:09.035 "is_configured": true, 00:10:09.035 "data_offset": 0, 00:10:09.035 "data_size": 65536 00:10:09.035 }, 00:10:09.035 { 00:10:09.035 "name": "BaseBdev4", 00:10:09.035 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:09.035 "is_configured": true, 00:10:09.035 "data_offset": 0, 00:10:09.035 "data_size": 65536 00:10:09.035 } 00:10:09.035 ] 00:10:09.035 }' 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.035 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.297 23:05:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.297 [2024-11-18 23:05:28.643906] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.297 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.562 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.562 "name": "Existed_Raid", 00:10:09.562 "aliases": [ 00:10:09.562 "5abed6e8-2aad-48a0-bf28-24e7b4cff59b" 00:10:09.562 ], 00:10:09.562 "product_name": "Raid Volume", 00:10:09.562 "block_size": 512, 00:10:09.562 "num_blocks": 262144, 00:10:09.562 "uuid": "5abed6e8-2aad-48a0-bf28-24e7b4cff59b", 00:10:09.562 "assigned_rate_limits": { 00:10:09.562 "rw_ios_per_sec": 0, 00:10:09.562 "rw_mbytes_per_sec": 0, 00:10:09.562 "r_mbytes_per_sec": 0, 00:10:09.562 "w_mbytes_per_sec": 0 00:10:09.562 }, 00:10:09.562 "claimed": false, 00:10:09.562 "zoned": false, 00:10:09.562 "supported_io_types": { 00:10:09.562 "read": true, 00:10:09.562 "write": true, 00:10:09.562 "unmap": true, 00:10:09.562 "flush": true, 00:10:09.562 "reset": true, 00:10:09.562 "nvme_admin": false, 00:10:09.562 "nvme_io": false, 00:10:09.562 "nvme_io_md": false, 00:10:09.562 "write_zeroes": true, 00:10:09.562 "zcopy": false, 00:10:09.562 "get_zone_info": false, 00:10:09.562 "zone_management": false, 00:10:09.562 "zone_append": false, 00:10:09.562 "compare": false, 00:10:09.562 "compare_and_write": false, 00:10:09.562 "abort": false, 00:10:09.562 "seek_hole": false, 00:10:09.562 "seek_data": false, 00:10:09.562 "copy": false, 00:10:09.562 "nvme_iov_md": false 00:10:09.562 }, 
00:10:09.562 "memory_domains": [ 00:10:09.562 { 00:10:09.562 "dma_device_id": "system", 00:10:09.562 "dma_device_type": 1 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.562 "dma_device_type": 2 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "system", 00:10:09.562 "dma_device_type": 1 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.562 "dma_device_type": 2 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "system", 00:10:09.562 "dma_device_type": 1 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.562 "dma_device_type": 2 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "system", 00:10:09.562 "dma_device_type": 1 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.562 "dma_device_type": 2 00:10:09.562 } 00:10:09.562 ], 00:10:09.562 "driver_specific": { 00:10:09.562 "raid": { 00:10:09.562 "uuid": "5abed6e8-2aad-48a0-bf28-24e7b4cff59b", 00:10:09.562 "strip_size_kb": 64, 00:10:09.562 "state": "online", 00:10:09.562 "raid_level": "concat", 00:10:09.562 "superblock": false, 00:10:09.562 "num_base_bdevs": 4, 00:10:09.562 "num_base_bdevs_discovered": 4, 00:10:09.562 "num_base_bdevs_operational": 4, 00:10:09.562 "base_bdevs_list": [ 00:10:09.562 { 00:10:09.562 "name": "NewBaseBdev", 00:10:09.562 "uuid": "e1b845c2-5a0c-4c12-9a2e-e8d1e91048bb", 00:10:09.562 "is_configured": true, 00:10:09.562 "data_offset": 0, 00:10:09.562 "data_size": 65536 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "name": "BaseBdev2", 00:10:09.562 "uuid": "0f3921e5-1da0-48f8-aa2c-cdb154dcfb65", 00:10:09.562 "is_configured": true, 00:10:09.562 "data_offset": 0, 00:10:09.562 "data_size": 65536 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "name": "BaseBdev3", 00:10:09.562 "uuid": "82c3be17-6710-4304-9388-12d75dabf0a4", 00:10:09.562 "is_configured": true, 00:10:09.562 "data_offset": 0, 
00:10:09.562 "data_size": 65536 00:10:09.562 }, 00:10:09.562 { 00:10:09.562 "name": "BaseBdev4", 00:10:09.562 "uuid": "710cce4f-4d06-4178-8701-c5c2b2064a84", 00:10:09.562 "is_configured": true, 00:10:09.562 "data_offset": 0, 00:10:09.562 "data_size": 65536 00:10:09.562 } 00:10:09.562 ] 00:10:09.562 } 00:10:09.562 } 00:10:09.562 }' 00:10:09.562 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.562 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:09.562 BaseBdev2 00:10:09.562 BaseBdev3 00:10:09.562 BaseBdev4' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.563 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.823 [2024-11-18 23:05:28.959059] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.823 [2024-11-18 23:05:28.959129] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.823 [2024-11-18 23:05:28.959222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.823 [2024-11-18 23:05:28.959328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.823 [2024-11-18 23:05:28.959398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82125 00:10:09.823 23:05:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82125 ']' 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82125 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.823 23:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82125 00:10:09.823 killing process with pid 82125 00:10:09.823 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.823 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.823 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82125' 00:10:09.823 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82125 00:10:09.823 [2024-11-18 23:05:29.006591] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.823 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82125 00:10:09.823 [2024-11-18 23:05:29.046440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:10.083 ************************************ 00:10:10.083 END TEST raid_state_function_test 00:10:10.083 ************************************ 00:10:10.083 00:10:10.083 real 0m9.408s 00:10:10.083 user 0m16.113s 00:10:10.083 sys 0m1.945s 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.083 23:05:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:10.083 23:05:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:10.083 23:05:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.083 23:05:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.083 ************************************ 00:10:10.083 START TEST raid_state_function_test_sb 00:10:10.083 ************************************ 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:10.083 Process raid pid: 82770 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82770 00:10:10.083 
23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82770' 00:10:10.083 23:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82770 00:10:10.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.084 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82770 ']' 00:10:10.084 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.084 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.084 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.084 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.084 23:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.084 [2024-11-18 23:05:29.452247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:10.084 [2024-11-18 23:05:29.452386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.343 [2024-11-18 23:05:29.612626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.343 [2024-11-18 23:05:29.656799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.343 [2024-11-18 23:05:29.698403] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.343 [2024-11-18 23:05:29.698524] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.912 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.912 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:10.912 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.912 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.912 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.912 [2024-11-18 23:05:30.283811] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.912 [2024-11-18 23:05:30.283867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.912 [2024-11-18 23:05:30.283880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.912 [2024-11-18 23:05:30.283889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.912 [2024-11-18 23:05:30.283895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:10.912 [2024-11-18 23:05:30.283908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.912 [2024-11-18 23:05:30.283914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:10.912 [2024-11-18 23:05:30.283922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.172 
23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.172 "name": "Existed_Raid", 00:10:11.172 "uuid": "33416067-5d7f-42c5-bf40-905db1b90568", 00:10:11.172 "strip_size_kb": 64, 00:10:11.172 "state": "configuring", 00:10:11.172 "raid_level": "concat", 00:10:11.172 "superblock": true, 00:10:11.172 "num_base_bdevs": 4, 00:10:11.172 "num_base_bdevs_discovered": 0, 00:10:11.172 "num_base_bdevs_operational": 4, 00:10:11.172 "base_bdevs_list": [ 00:10:11.172 { 00:10:11.172 "name": "BaseBdev1", 00:10:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.172 "is_configured": false, 00:10:11.172 "data_offset": 0, 00:10:11.172 "data_size": 0 00:10:11.172 }, 00:10:11.172 { 00:10:11.172 "name": "BaseBdev2", 00:10:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.172 "is_configured": false, 00:10:11.172 "data_offset": 0, 00:10:11.172 "data_size": 0 00:10:11.172 }, 00:10:11.172 { 00:10:11.172 "name": "BaseBdev3", 00:10:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.172 "is_configured": false, 00:10:11.172 "data_offset": 0, 00:10:11.172 "data_size": 0 00:10:11.172 }, 00:10:11.172 { 00:10:11.172 "name": "BaseBdev4", 00:10:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.172 "is_configured": false, 00:10:11.172 "data_offset": 0, 00:10:11.172 "data_size": 0 00:10:11.172 } 00:10:11.172 ] 00:10:11.172 }' 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.172 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 23:05:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 [2024-11-18 23:05:30.679036] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.433 [2024-11-18 23:05:30.679120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 [2024-11-18 23:05:30.691067] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.433 [2024-11-18 23:05:30.691144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.433 [2024-11-18 23:05:30.691169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.433 [2024-11-18 23:05:30.691191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.433 [2024-11-18 23:05:30.691215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.433 [2024-11-18 23:05:30.691235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.433 [2024-11-18 23:05:30.691252] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:11.433 [2024-11-18 23:05:30.691271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 [2024-11-18 23:05:30.711700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.433 BaseBdev1 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 23:05:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.434 [ 00:10:11.434 { 00:10:11.434 "name": "BaseBdev1", 00:10:11.434 "aliases": [ 00:10:11.434 "c017d140-b766-476e-8071-d919471fe9fb" 00:10:11.434 ], 00:10:11.434 "product_name": "Malloc disk", 00:10:11.434 "block_size": 512, 00:10:11.434 "num_blocks": 65536, 00:10:11.434 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:11.434 "assigned_rate_limits": { 00:10:11.434 "rw_ios_per_sec": 0, 00:10:11.434 "rw_mbytes_per_sec": 0, 00:10:11.434 "r_mbytes_per_sec": 0, 00:10:11.434 "w_mbytes_per_sec": 0 00:10:11.434 }, 00:10:11.434 "claimed": true, 00:10:11.434 "claim_type": "exclusive_write", 00:10:11.434 "zoned": false, 00:10:11.434 "supported_io_types": { 00:10:11.434 "read": true, 00:10:11.434 "write": true, 00:10:11.434 "unmap": true, 00:10:11.434 "flush": true, 00:10:11.434 "reset": true, 00:10:11.434 "nvme_admin": false, 00:10:11.434 "nvme_io": false, 00:10:11.434 "nvme_io_md": false, 00:10:11.434 "write_zeroes": true, 00:10:11.434 "zcopy": true, 00:10:11.434 "get_zone_info": false, 00:10:11.434 "zone_management": false, 00:10:11.434 "zone_append": false, 00:10:11.434 "compare": false, 00:10:11.434 "compare_and_write": false, 00:10:11.434 "abort": true, 00:10:11.434 "seek_hole": false, 00:10:11.434 "seek_data": false, 00:10:11.434 "copy": true, 00:10:11.434 "nvme_iov_md": false 00:10:11.434 }, 00:10:11.434 "memory_domains": [ 00:10:11.434 { 00:10:11.434 "dma_device_id": "system", 00:10:11.434 "dma_device_type": 1 00:10:11.434 }, 00:10:11.434 { 00:10:11.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.434 "dma_device_type": 2 00:10:11.434 } 
00:10:11.434 ], 00:10:11.434 "driver_specific": {} 00:10:11.434 } 00:10:11.434 ] 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.434 23:05:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.434 "name": "Existed_Raid", 00:10:11.434 "uuid": "2c5350f9-19de-4104-be43-719b0a36ba7c", 00:10:11.434 "strip_size_kb": 64, 00:10:11.434 "state": "configuring", 00:10:11.434 "raid_level": "concat", 00:10:11.434 "superblock": true, 00:10:11.434 "num_base_bdevs": 4, 00:10:11.434 "num_base_bdevs_discovered": 1, 00:10:11.434 "num_base_bdevs_operational": 4, 00:10:11.434 "base_bdevs_list": [ 00:10:11.434 { 00:10:11.434 "name": "BaseBdev1", 00:10:11.434 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:11.434 "is_configured": true, 00:10:11.434 "data_offset": 2048, 00:10:11.434 "data_size": 63488 00:10:11.434 }, 00:10:11.434 { 00:10:11.434 "name": "BaseBdev2", 00:10:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.434 "is_configured": false, 00:10:11.434 "data_offset": 0, 00:10:11.434 "data_size": 0 00:10:11.434 }, 00:10:11.434 { 00:10:11.434 "name": "BaseBdev3", 00:10:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.434 "is_configured": false, 00:10:11.434 "data_offset": 0, 00:10:11.434 "data_size": 0 00:10:11.434 }, 00:10:11.434 { 00:10:11.434 "name": "BaseBdev4", 00:10:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.434 "is_configured": false, 00:10:11.434 "data_offset": 0, 00:10:11.434 "data_size": 0 00:10:11.434 } 00:10:11.434 ] 00:10:11.434 }' 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.434 23:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.004 23:05:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.004 [2024-11-18 23:05:31.182892] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.004 [2024-11-18 23:05:31.182999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.004 [2024-11-18 23:05:31.194928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.004 [2024-11-18 23:05:31.196746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.004 [2024-11-18 23:05:31.196787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.004 [2024-11-18 23:05:31.196796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.004 [2024-11-18 23:05:31.196804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.004 [2024-11-18 23:05:31.196810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.004 [2024-11-18 23:05:31.196818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:12.004 "name": "Existed_Raid", 00:10:12.004 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:12.004 "strip_size_kb": 64, 00:10:12.004 "state": "configuring", 00:10:12.004 "raid_level": "concat", 00:10:12.004 "superblock": true, 00:10:12.004 "num_base_bdevs": 4, 00:10:12.004 "num_base_bdevs_discovered": 1, 00:10:12.004 "num_base_bdevs_operational": 4, 00:10:12.004 "base_bdevs_list": [ 00:10:12.004 { 00:10:12.004 "name": "BaseBdev1", 00:10:12.004 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:12.004 "is_configured": true, 00:10:12.004 "data_offset": 2048, 00:10:12.004 "data_size": 63488 00:10:12.004 }, 00:10:12.004 { 00:10:12.004 "name": "BaseBdev2", 00:10:12.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.004 "is_configured": false, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 0 00:10:12.004 }, 00:10:12.004 { 00:10:12.004 "name": "BaseBdev3", 00:10:12.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.004 "is_configured": false, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 0 00:10:12.004 }, 00:10:12.004 { 00:10:12.004 "name": "BaseBdev4", 00:10:12.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.004 "is_configured": false, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 0 00:10:12.004 } 00:10:12.004 ] 00:10:12.004 }' 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.004 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.264 [2024-11-18 23:05:31.621638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:12.264 BaseBdev2 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.264 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.525 [ 00:10:12.525 { 00:10:12.525 "name": "BaseBdev2", 00:10:12.525 "aliases": [ 00:10:12.525 "75c826c4-7205-4b65-96d8-c2ebcbf1e484" 00:10:12.525 ], 00:10:12.525 "product_name": "Malloc disk", 00:10:12.525 "block_size": 512, 00:10:12.525 "num_blocks": 65536, 00:10:12.525 "uuid": "75c826c4-7205-4b65-96d8-c2ebcbf1e484", 
00:10:12.525 "assigned_rate_limits": { 00:10:12.525 "rw_ios_per_sec": 0, 00:10:12.525 "rw_mbytes_per_sec": 0, 00:10:12.525 "r_mbytes_per_sec": 0, 00:10:12.525 "w_mbytes_per_sec": 0 00:10:12.525 }, 00:10:12.525 "claimed": true, 00:10:12.525 "claim_type": "exclusive_write", 00:10:12.525 "zoned": false, 00:10:12.525 "supported_io_types": { 00:10:12.525 "read": true, 00:10:12.525 "write": true, 00:10:12.525 "unmap": true, 00:10:12.525 "flush": true, 00:10:12.525 "reset": true, 00:10:12.525 "nvme_admin": false, 00:10:12.525 "nvme_io": false, 00:10:12.525 "nvme_io_md": false, 00:10:12.525 "write_zeroes": true, 00:10:12.525 "zcopy": true, 00:10:12.525 "get_zone_info": false, 00:10:12.525 "zone_management": false, 00:10:12.525 "zone_append": false, 00:10:12.525 "compare": false, 00:10:12.525 "compare_and_write": false, 00:10:12.525 "abort": true, 00:10:12.525 "seek_hole": false, 00:10:12.525 "seek_data": false, 00:10:12.525 "copy": true, 00:10:12.525 "nvme_iov_md": false 00:10:12.525 }, 00:10:12.525 "memory_domains": [ 00:10:12.525 { 00:10:12.525 "dma_device_id": "system", 00:10:12.525 "dma_device_type": 1 00:10:12.525 }, 00:10:12.525 { 00:10:12.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.525 "dma_device_type": 2 00:10:12.525 } 00:10:12.525 ], 00:10:12.525 "driver_specific": {} 00:10:12.525 } 00:10:12.525 ] 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.525 "name": "Existed_Raid", 00:10:12.525 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:12.525 "strip_size_kb": 64, 00:10:12.525 "state": "configuring", 00:10:12.525 "raid_level": "concat", 00:10:12.525 "superblock": true, 00:10:12.525 "num_base_bdevs": 4, 00:10:12.525 "num_base_bdevs_discovered": 2, 00:10:12.525 
"num_base_bdevs_operational": 4, 00:10:12.525 "base_bdevs_list": [ 00:10:12.525 { 00:10:12.525 "name": "BaseBdev1", 00:10:12.525 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:12.525 "is_configured": true, 00:10:12.525 "data_offset": 2048, 00:10:12.525 "data_size": 63488 00:10:12.525 }, 00:10:12.525 { 00:10:12.525 "name": "BaseBdev2", 00:10:12.525 "uuid": "75c826c4-7205-4b65-96d8-c2ebcbf1e484", 00:10:12.525 "is_configured": true, 00:10:12.525 "data_offset": 2048, 00:10:12.525 "data_size": 63488 00:10:12.525 }, 00:10:12.525 { 00:10:12.525 "name": "BaseBdev3", 00:10:12.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.525 "is_configured": false, 00:10:12.525 "data_offset": 0, 00:10:12.525 "data_size": 0 00:10:12.525 }, 00:10:12.525 { 00:10:12.525 "name": "BaseBdev4", 00:10:12.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.525 "is_configured": false, 00:10:12.525 "data_offset": 0, 00:10:12.525 "data_size": 0 00:10:12.525 } 00:10:12.525 ] 00:10:12.525 }' 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.525 23:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.786 BaseBdev3 00:10:12.786 [2024-11-18 23:05:32.119808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.786 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.787 [ 00:10:12.787 { 00:10:12.787 "name": "BaseBdev3", 00:10:12.787 "aliases": [ 00:10:12.787 "7452631f-edb3-43fc-960d-0f664285ddf7" 00:10:12.787 ], 00:10:12.787 "product_name": "Malloc disk", 00:10:12.787 "block_size": 512, 00:10:12.787 "num_blocks": 65536, 00:10:12.787 "uuid": "7452631f-edb3-43fc-960d-0f664285ddf7", 00:10:12.787 "assigned_rate_limits": { 00:10:12.787 "rw_ios_per_sec": 0, 00:10:12.787 "rw_mbytes_per_sec": 0, 00:10:12.787 "r_mbytes_per_sec": 0, 00:10:12.787 "w_mbytes_per_sec": 0 00:10:12.787 }, 00:10:12.787 "claimed": true, 00:10:12.787 "claim_type": "exclusive_write", 00:10:12.787 "zoned": false, 00:10:12.787 "supported_io_types": { 
00:10:12.787 "read": true, 00:10:12.787 "write": true, 00:10:12.787 "unmap": true, 00:10:12.787 "flush": true, 00:10:12.787 "reset": true, 00:10:12.787 "nvme_admin": false, 00:10:12.787 "nvme_io": false, 00:10:12.787 "nvme_io_md": false, 00:10:12.787 "write_zeroes": true, 00:10:12.787 "zcopy": true, 00:10:12.787 "get_zone_info": false, 00:10:12.787 "zone_management": false, 00:10:12.787 "zone_append": false, 00:10:12.787 "compare": false, 00:10:12.787 "compare_and_write": false, 00:10:12.787 "abort": true, 00:10:12.787 "seek_hole": false, 00:10:12.787 "seek_data": false, 00:10:12.787 "copy": true, 00:10:12.787 "nvme_iov_md": false 00:10:12.787 }, 00:10:12.787 "memory_domains": [ 00:10:12.787 { 00:10:12.787 "dma_device_id": "system", 00:10:12.787 "dma_device_type": 1 00:10:12.787 }, 00:10:12.787 { 00:10:12.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.787 "dma_device_type": 2 00:10:12.787 } 00:10:12.787 ], 00:10:12.787 "driver_specific": {} 00:10:12.787 } 00:10:12.787 ] 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.787 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.047 "name": "Existed_Raid", 00:10:13.047 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:13.047 "strip_size_kb": 64, 00:10:13.047 "state": "configuring", 00:10:13.047 "raid_level": "concat", 00:10:13.047 "superblock": true, 00:10:13.047 "num_base_bdevs": 4, 00:10:13.047 "num_base_bdevs_discovered": 3, 00:10:13.047 "num_base_bdevs_operational": 4, 00:10:13.047 "base_bdevs_list": [ 00:10:13.047 { 00:10:13.047 "name": "BaseBdev1", 00:10:13.047 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:13.047 "is_configured": true, 00:10:13.047 "data_offset": 2048, 00:10:13.047 "data_size": 63488 00:10:13.047 }, 00:10:13.047 { 00:10:13.047 "name": "BaseBdev2", 00:10:13.047 
"uuid": "75c826c4-7205-4b65-96d8-c2ebcbf1e484", 00:10:13.047 "is_configured": true, 00:10:13.047 "data_offset": 2048, 00:10:13.047 "data_size": 63488 00:10:13.047 }, 00:10:13.047 { 00:10:13.047 "name": "BaseBdev3", 00:10:13.047 "uuid": "7452631f-edb3-43fc-960d-0f664285ddf7", 00:10:13.047 "is_configured": true, 00:10:13.047 "data_offset": 2048, 00:10:13.047 "data_size": 63488 00:10:13.047 }, 00:10:13.047 { 00:10:13.047 "name": "BaseBdev4", 00:10:13.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.047 "is_configured": false, 00:10:13.047 "data_offset": 0, 00:10:13.047 "data_size": 0 00:10:13.047 } 00:10:13.047 ] 00:10:13.047 }' 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.047 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.307 BaseBdev4 00:10:13.307 [2024-11-18 23:05:32.641912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.307 [2024-11-18 23:05:32.642119] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:13.307 [2024-11-18 23:05:32.642134] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.307 [2024-11-18 23:05:32.642418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:13.307 [2024-11-18 23:05:32.642544] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:13.307 [2024-11-18 23:05:32.642557] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:13.307 [2024-11-18 23:05:32.642680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.307 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.308 [ 00:10:13.308 { 00:10:13.308 "name": "BaseBdev4", 00:10:13.308 "aliases": [ 00:10:13.308 "562acc7c-0abe-4d0c-9c96-4c274e2e0fca" 00:10:13.308 ], 00:10:13.308 "product_name": "Malloc disk", 00:10:13.308 "block_size": 512, 00:10:13.308 
"num_blocks": 65536, 00:10:13.308 "uuid": "562acc7c-0abe-4d0c-9c96-4c274e2e0fca", 00:10:13.308 "assigned_rate_limits": { 00:10:13.308 "rw_ios_per_sec": 0, 00:10:13.308 "rw_mbytes_per_sec": 0, 00:10:13.308 "r_mbytes_per_sec": 0, 00:10:13.308 "w_mbytes_per_sec": 0 00:10:13.308 }, 00:10:13.308 "claimed": true, 00:10:13.308 "claim_type": "exclusive_write", 00:10:13.308 "zoned": false, 00:10:13.308 "supported_io_types": { 00:10:13.308 "read": true, 00:10:13.308 "write": true, 00:10:13.308 "unmap": true, 00:10:13.308 "flush": true, 00:10:13.308 "reset": true, 00:10:13.308 "nvme_admin": false, 00:10:13.308 "nvme_io": false, 00:10:13.308 "nvme_io_md": false, 00:10:13.308 "write_zeroes": true, 00:10:13.308 "zcopy": true, 00:10:13.308 "get_zone_info": false, 00:10:13.308 "zone_management": false, 00:10:13.308 "zone_append": false, 00:10:13.308 "compare": false, 00:10:13.308 "compare_and_write": false, 00:10:13.308 "abort": true, 00:10:13.308 "seek_hole": false, 00:10:13.308 "seek_data": false, 00:10:13.308 "copy": true, 00:10:13.308 "nvme_iov_md": false 00:10:13.308 }, 00:10:13.308 "memory_domains": [ 00:10:13.308 { 00:10:13.308 "dma_device_id": "system", 00:10:13.308 "dma_device_type": 1 00:10:13.308 }, 00:10:13.308 { 00:10:13.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.308 "dma_device_type": 2 00:10:13.308 } 00:10:13.308 ], 00:10:13.308 "driver_specific": {} 00:10:13.308 } 00:10:13.308 ] 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.308 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.568 "name": "Existed_Raid", 00:10:13.568 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:13.568 "strip_size_kb": 64, 00:10:13.568 "state": "online", 00:10:13.568 "raid_level": "concat", 00:10:13.568 "superblock": true, 00:10:13.568 "num_base_bdevs": 4, 
00:10:13.568 "num_base_bdevs_discovered": 4, 00:10:13.568 "num_base_bdevs_operational": 4, 00:10:13.568 "base_bdevs_list": [ 00:10:13.568 { 00:10:13.568 "name": "BaseBdev1", 00:10:13.568 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:13.568 "is_configured": true, 00:10:13.568 "data_offset": 2048, 00:10:13.568 "data_size": 63488 00:10:13.568 }, 00:10:13.568 { 00:10:13.568 "name": "BaseBdev2", 00:10:13.568 "uuid": "75c826c4-7205-4b65-96d8-c2ebcbf1e484", 00:10:13.568 "is_configured": true, 00:10:13.568 "data_offset": 2048, 00:10:13.568 "data_size": 63488 00:10:13.568 }, 00:10:13.568 { 00:10:13.568 "name": "BaseBdev3", 00:10:13.568 "uuid": "7452631f-edb3-43fc-960d-0f664285ddf7", 00:10:13.568 "is_configured": true, 00:10:13.568 "data_offset": 2048, 00:10:13.568 "data_size": 63488 00:10:13.568 }, 00:10:13.568 { 00:10:13.568 "name": "BaseBdev4", 00:10:13.568 "uuid": "562acc7c-0abe-4d0c-9c96-4c274e2e0fca", 00:10:13.568 "is_configured": true, 00:10:13.568 "data_offset": 2048, 00:10:13.568 "data_size": 63488 00:10:13.568 } 00:10:13.568 ] 00:10:13.568 }' 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.568 23:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.828 
23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.828 [2024-11-18 23:05:33.109499] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.828 "name": "Existed_Raid", 00:10:13.828 "aliases": [ 00:10:13.828 "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c" 00:10:13.828 ], 00:10:13.828 "product_name": "Raid Volume", 00:10:13.828 "block_size": 512, 00:10:13.828 "num_blocks": 253952, 00:10:13.828 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:13.828 "assigned_rate_limits": { 00:10:13.828 "rw_ios_per_sec": 0, 00:10:13.828 "rw_mbytes_per_sec": 0, 00:10:13.828 "r_mbytes_per_sec": 0, 00:10:13.828 "w_mbytes_per_sec": 0 00:10:13.828 }, 00:10:13.828 "claimed": false, 00:10:13.828 "zoned": false, 00:10:13.828 "supported_io_types": { 00:10:13.828 "read": true, 00:10:13.828 "write": true, 00:10:13.828 "unmap": true, 00:10:13.828 "flush": true, 00:10:13.828 "reset": true, 00:10:13.828 "nvme_admin": false, 00:10:13.828 "nvme_io": false, 00:10:13.828 "nvme_io_md": false, 00:10:13.828 "write_zeroes": true, 00:10:13.828 "zcopy": false, 00:10:13.828 "get_zone_info": false, 00:10:13.828 "zone_management": false, 00:10:13.828 "zone_append": false, 00:10:13.828 "compare": false, 00:10:13.828 "compare_and_write": false, 00:10:13.828 "abort": false, 00:10:13.828 "seek_hole": false, 00:10:13.828 "seek_data": false, 00:10:13.828 "copy": false, 00:10:13.828 
"nvme_iov_md": false 00:10:13.828 }, 00:10:13.828 "memory_domains": [ 00:10:13.828 { 00:10:13.828 "dma_device_id": "system", 00:10:13.828 "dma_device_type": 1 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.828 "dma_device_type": 2 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "system", 00:10:13.828 "dma_device_type": 1 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.828 "dma_device_type": 2 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "system", 00:10:13.828 "dma_device_type": 1 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.828 "dma_device_type": 2 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "system", 00:10:13.828 "dma_device_type": 1 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.828 "dma_device_type": 2 00:10:13.828 } 00:10:13.828 ], 00:10:13.828 "driver_specific": { 00:10:13.828 "raid": { 00:10:13.828 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:13.828 "strip_size_kb": 64, 00:10:13.828 "state": "online", 00:10:13.828 "raid_level": "concat", 00:10:13.828 "superblock": true, 00:10:13.828 "num_base_bdevs": 4, 00:10:13.828 "num_base_bdevs_discovered": 4, 00:10:13.828 "num_base_bdevs_operational": 4, 00:10:13.828 "base_bdevs_list": [ 00:10:13.828 { 00:10:13.828 "name": "BaseBdev1", 00:10:13.828 "uuid": "c017d140-b766-476e-8071-d919471fe9fb", 00:10:13.828 "is_configured": true, 00:10:13.828 "data_offset": 2048, 00:10:13.828 "data_size": 63488 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "name": "BaseBdev2", 00:10:13.828 "uuid": "75c826c4-7205-4b65-96d8-c2ebcbf1e484", 00:10:13.828 "is_configured": true, 00:10:13.828 "data_offset": 2048, 00:10:13.828 "data_size": 63488 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "name": "BaseBdev3", 00:10:13.828 "uuid": "7452631f-edb3-43fc-960d-0f664285ddf7", 00:10:13.828 "is_configured": true, 
00:10:13.828 "data_offset": 2048, 00:10:13.828 "data_size": 63488 00:10:13.828 }, 00:10:13.828 { 00:10:13.828 "name": "BaseBdev4", 00:10:13.828 "uuid": "562acc7c-0abe-4d0c-9c96-4c274e2e0fca", 00:10:13.828 "is_configured": true, 00:10:13.828 "data_offset": 2048, 00:10:13.828 "data_size": 63488 00:10:13.828 } 00:10:13.828 ] 00:10:13.828 } 00:10:13.828 } 00:10:13.828 }' 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:13.828 BaseBdev2 00:10:13.828 BaseBdev3 00:10:13.828 BaseBdev4' 00:10:13.828 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.088 23:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 [2024-11-18 23:05:33.448603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.088 [2024-11-18 23:05:33.448633] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.088 [2024-11-18 23:05:33.448688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:14.088 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.348 "name": "Existed_Raid", 00:10:14.348 "uuid": "ab2a3a2f-e50e-4154-9f65-eff3f920ae0c", 00:10:14.348 "strip_size_kb": 64, 00:10:14.348 "state": "offline", 00:10:14.348 "raid_level": "concat", 00:10:14.348 "superblock": true, 00:10:14.348 "num_base_bdevs": 4, 00:10:14.348 "num_base_bdevs_discovered": 3, 00:10:14.348 "num_base_bdevs_operational": 3, 00:10:14.348 "base_bdevs_list": [ 00:10:14.348 { 00:10:14.348 "name": null, 00:10:14.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.348 "is_configured": false, 00:10:14.348 "data_offset": 0, 00:10:14.348 "data_size": 63488 00:10:14.348 }, 00:10:14.348 { 00:10:14.348 "name": "BaseBdev2", 00:10:14.348 "uuid": "75c826c4-7205-4b65-96d8-c2ebcbf1e484", 00:10:14.348 "is_configured": true, 00:10:14.348 "data_offset": 2048, 00:10:14.348 "data_size": 63488 00:10:14.348 }, 00:10:14.348 { 00:10:14.348 "name": "BaseBdev3", 00:10:14.348 "uuid": "7452631f-edb3-43fc-960d-0f664285ddf7", 00:10:14.348 "is_configured": true, 00:10:14.348 "data_offset": 2048, 00:10:14.348 "data_size": 63488 00:10:14.348 }, 00:10:14.348 { 00:10:14.348 "name": "BaseBdev4", 00:10:14.348 "uuid": "562acc7c-0abe-4d0c-9c96-4c274e2e0fca", 00:10:14.348 "is_configured": true, 00:10:14.348 "data_offset": 2048, 00:10:14.348 "data_size": 63488 00:10:14.348 } 00:10:14.348 ] 00:10:14.348 }' 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.348 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.608 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:14.608 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.608 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.608 23:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.608 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.608 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.608 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.609 [2024-11-18 23:05:33.947265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.609 23:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.869 [2024-11-18 23:05:34.014468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:14.869 23:05:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.869 [2024-11-18 23:05:34.081357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:14.869 [2024-11-18 23:05:34.081446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.869 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 BaseBdev2 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 [ 00:10:14.870 { 00:10:14.870 "name": "BaseBdev2", 00:10:14.870 "aliases": [ 00:10:14.870 
"b9349b01-a4d3-47f7-8080-f99bf2ca90f9" 00:10:14.870 ], 00:10:14.870 "product_name": "Malloc disk", 00:10:14.870 "block_size": 512, 00:10:14.870 "num_blocks": 65536, 00:10:14.870 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:14.870 "assigned_rate_limits": { 00:10:14.870 "rw_ios_per_sec": 0, 00:10:14.870 "rw_mbytes_per_sec": 0, 00:10:14.870 "r_mbytes_per_sec": 0, 00:10:14.870 "w_mbytes_per_sec": 0 00:10:14.870 }, 00:10:14.870 "claimed": false, 00:10:14.870 "zoned": false, 00:10:14.870 "supported_io_types": { 00:10:14.870 "read": true, 00:10:14.870 "write": true, 00:10:14.870 "unmap": true, 00:10:14.870 "flush": true, 00:10:14.870 "reset": true, 00:10:14.870 "nvme_admin": false, 00:10:14.870 "nvme_io": false, 00:10:14.870 "nvme_io_md": false, 00:10:14.870 "write_zeroes": true, 00:10:14.870 "zcopy": true, 00:10:14.870 "get_zone_info": false, 00:10:14.870 "zone_management": false, 00:10:14.870 "zone_append": false, 00:10:14.870 "compare": false, 00:10:14.870 "compare_and_write": false, 00:10:14.870 "abort": true, 00:10:14.870 "seek_hole": false, 00:10:14.870 "seek_data": false, 00:10:14.870 "copy": true, 00:10:14.870 "nvme_iov_md": false 00:10:14.870 }, 00:10:14.870 "memory_domains": [ 00:10:14.870 { 00:10:14.870 "dma_device_id": "system", 00:10:14.870 "dma_device_type": 1 00:10:14.870 }, 00:10:14.870 { 00:10:14.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.870 "dma_device_type": 2 00:10:14.870 } 00:10:14.870 ], 00:10:14.870 "driver_specific": {} 00:10:14.870 } 00:10:14.870 ] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.870 23:05:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 BaseBdev3 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.870 [ 00:10:14.870 { 
00:10:14.870 "name": "BaseBdev3", 00:10:14.870 "aliases": [ 00:10:14.870 "c20f1282-8e21-4938-92f3-99b55d5855a9" 00:10:14.870 ], 00:10:14.870 "product_name": "Malloc disk", 00:10:14.870 "block_size": 512, 00:10:14.870 "num_blocks": 65536, 00:10:14.870 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:14.870 "assigned_rate_limits": { 00:10:14.870 "rw_ios_per_sec": 0, 00:10:14.870 "rw_mbytes_per_sec": 0, 00:10:14.870 "r_mbytes_per_sec": 0, 00:10:14.870 "w_mbytes_per_sec": 0 00:10:14.870 }, 00:10:14.870 "claimed": false, 00:10:14.870 "zoned": false, 00:10:14.870 "supported_io_types": { 00:10:14.870 "read": true, 00:10:14.870 "write": true, 00:10:14.870 "unmap": true, 00:10:14.870 "flush": true, 00:10:14.870 "reset": true, 00:10:14.870 "nvme_admin": false, 00:10:14.870 "nvme_io": false, 00:10:14.870 "nvme_io_md": false, 00:10:14.870 "write_zeroes": true, 00:10:14.870 "zcopy": true, 00:10:14.870 "get_zone_info": false, 00:10:14.870 "zone_management": false, 00:10:14.870 "zone_append": false, 00:10:14.870 "compare": false, 00:10:14.870 "compare_and_write": false, 00:10:14.870 "abort": true, 00:10:14.870 "seek_hole": false, 00:10:14.870 "seek_data": false, 00:10:14.870 "copy": true, 00:10:14.870 "nvme_iov_md": false 00:10:14.870 }, 00:10:14.870 "memory_domains": [ 00:10:14.870 { 00:10:14.870 "dma_device_id": "system", 00:10:14.870 "dma_device_type": 1 00:10:14.870 }, 00:10:14.870 { 00:10:14.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.870 "dma_device_type": 2 00:10:14.870 } 00:10:14.870 ], 00:10:14.870 "driver_specific": {} 00:10:14.870 } 00:10:14.870 ] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:14.870 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.183 BaseBdev4 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.183 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:15.183 [ 00:10:15.183 { 00:10:15.183 "name": "BaseBdev4", 00:10:15.183 "aliases": [ 00:10:15.183 "4d993c01-6eb3-480a-be98-0d1cf05f66c3" 00:10:15.183 ], 00:10:15.183 "product_name": "Malloc disk", 00:10:15.183 "block_size": 512, 00:10:15.183 "num_blocks": 65536, 00:10:15.183 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:15.183 "assigned_rate_limits": { 00:10:15.183 "rw_ios_per_sec": 0, 00:10:15.183 "rw_mbytes_per_sec": 0, 00:10:15.183 "r_mbytes_per_sec": 0, 00:10:15.183 "w_mbytes_per_sec": 0 00:10:15.183 }, 00:10:15.183 "claimed": false, 00:10:15.183 "zoned": false, 00:10:15.183 "supported_io_types": { 00:10:15.183 "read": true, 00:10:15.183 "write": true, 00:10:15.183 "unmap": true, 00:10:15.183 "flush": true, 00:10:15.183 "reset": true, 00:10:15.183 "nvme_admin": false, 00:10:15.183 "nvme_io": false, 00:10:15.183 "nvme_io_md": false, 00:10:15.183 "write_zeroes": true, 00:10:15.183 "zcopy": true, 00:10:15.183 "get_zone_info": false, 00:10:15.184 "zone_management": false, 00:10:15.184 "zone_append": false, 00:10:15.184 "compare": false, 00:10:15.184 "compare_and_write": false, 00:10:15.184 "abort": true, 00:10:15.184 "seek_hole": false, 00:10:15.184 "seek_data": false, 00:10:15.184 "copy": true, 00:10:15.184 "nvme_iov_md": false 00:10:15.184 }, 00:10:15.184 "memory_domains": [ 00:10:15.184 { 00:10:15.184 "dma_device_id": "system", 00:10:15.184 "dma_device_type": 1 00:10:15.184 }, 00:10:15.184 { 00:10:15.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.184 "dma_device_type": 2 00:10:15.184 } 00:10:15.184 ], 00:10:15.184 "driver_specific": {} 00:10:15.184 } 00:10:15.184 ] 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.184 23:05:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.184 [2024-11-18 23:05:34.308405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.184 [2024-11-18 23:05:34.308513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.184 [2024-11-18 23:05:34.308555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.184 [2024-11-18 23:05:34.310369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.184 [2024-11-18 23:05:34.310454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.184 "name": "Existed_Raid", 00:10:15.184 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:15.184 "strip_size_kb": 64, 00:10:15.184 "state": "configuring", 00:10:15.184 "raid_level": "concat", 00:10:15.184 "superblock": true, 00:10:15.184 "num_base_bdevs": 4, 00:10:15.184 "num_base_bdevs_discovered": 3, 00:10:15.184 "num_base_bdevs_operational": 4, 00:10:15.184 "base_bdevs_list": [ 00:10:15.184 { 00:10:15.184 "name": "BaseBdev1", 00:10:15.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.184 "is_configured": false, 00:10:15.184 "data_offset": 0, 00:10:15.184 "data_size": 0 00:10:15.184 }, 00:10:15.184 { 00:10:15.184 "name": "BaseBdev2", 00:10:15.184 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:15.184 "is_configured": true, 00:10:15.184 "data_offset": 2048, 00:10:15.184 "data_size": 63488 
00:10:15.184 }, 00:10:15.184 { 00:10:15.184 "name": "BaseBdev3", 00:10:15.184 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:15.184 "is_configured": true, 00:10:15.184 "data_offset": 2048, 00:10:15.184 "data_size": 63488 00:10:15.184 }, 00:10:15.184 { 00:10:15.184 "name": "BaseBdev4", 00:10:15.184 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:15.184 "is_configured": true, 00:10:15.184 "data_offset": 2048, 00:10:15.184 "data_size": 63488 00:10:15.184 } 00:10:15.184 ] 00:10:15.184 }' 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.184 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.445 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:15.445 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.445 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.446 [2024-11-18 23:05:34.775551] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.446 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.718 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.718 "name": "Existed_Raid", 00:10:15.718 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:15.718 "strip_size_kb": 64, 00:10:15.718 "state": "configuring", 00:10:15.718 "raid_level": "concat", 00:10:15.718 "superblock": true, 00:10:15.718 "num_base_bdevs": 4, 00:10:15.718 "num_base_bdevs_discovered": 2, 00:10:15.718 "num_base_bdevs_operational": 4, 00:10:15.718 "base_bdevs_list": [ 00:10:15.718 { 00:10:15.718 "name": "BaseBdev1", 00:10:15.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.718 "is_configured": false, 00:10:15.718 "data_offset": 0, 00:10:15.718 "data_size": 0 00:10:15.718 }, 00:10:15.718 { 00:10:15.718 "name": null, 00:10:15.718 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:15.718 "is_configured": false, 00:10:15.718 "data_offset": 0, 00:10:15.718 "data_size": 63488 
00:10:15.718 }, 00:10:15.718 { 00:10:15.718 "name": "BaseBdev3", 00:10:15.718 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:15.718 "is_configured": true, 00:10:15.718 "data_offset": 2048, 00:10:15.718 "data_size": 63488 00:10:15.718 }, 00:10:15.718 { 00:10:15.718 "name": "BaseBdev4", 00:10:15.718 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:15.718 "is_configured": true, 00:10:15.718 "data_offset": 2048, 00:10:15.718 "data_size": 63488 00:10:15.718 } 00:10:15.718 ] 00:10:15.718 }' 00:10:15.718 23:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.718 23:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.996 [2024-11-18 23:05:35.301550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.996 BaseBdev1 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.996 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.997 [ 00:10:15.997 { 00:10:15.997 "name": "BaseBdev1", 00:10:15.997 "aliases": [ 00:10:15.997 "e7375f7b-aa68-4721-9b95-b4df1dbab422" 00:10:15.997 ], 00:10:15.997 "product_name": "Malloc disk", 00:10:15.997 "block_size": 512, 00:10:15.997 "num_blocks": 65536, 00:10:15.997 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:15.997 "assigned_rate_limits": { 00:10:15.997 "rw_ios_per_sec": 0, 00:10:15.997 "rw_mbytes_per_sec": 0, 
00:10:15.997 "r_mbytes_per_sec": 0, 00:10:15.997 "w_mbytes_per_sec": 0 00:10:15.997 }, 00:10:15.997 "claimed": true, 00:10:15.997 "claim_type": "exclusive_write", 00:10:15.997 "zoned": false, 00:10:15.997 "supported_io_types": { 00:10:15.997 "read": true, 00:10:15.997 "write": true, 00:10:15.997 "unmap": true, 00:10:15.997 "flush": true, 00:10:15.997 "reset": true, 00:10:15.997 "nvme_admin": false, 00:10:15.997 "nvme_io": false, 00:10:15.997 "nvme_io_md": false, 00:10:15.997 "write_zeroes": true, 00:10:15.997 "zcopy": true, 00:10:15.997 "get_zone_info": false, 00:10:15.997 "zone_management": false, 00:10:15.997 "zone_append": false, 00:10:15.997 "compare": false, 00:10:15.997 "compare_and_write": false, 00:10:15.997 "abort": true, 00:10:15.997 "seek_hole": false, 00:10:15.997 "seek_data": false, 00:10:15.997 "copy": true, 00:10:15.997 "nvme_iov_md": false 00:10:15.997 }, 00:10:15.997 "memory_domains": [ 00:10:15.997 { 00:10:15.997 "dma_device_id": "system", 00:10:15.997 "dma_device_type": 1 00:10:15.997 }, 00:10:15.997 { 00:10:15.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.997 "dma_device_type": 2 00:10:15.997 } 00:10:15.997 ], 00:10:15.997 "driver_specific": {} 00:10:15.997 } 00:10:15.997 ] 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.997 23:05:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.997 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.256 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.257 "name": "Existed_Raid", 00:10:16.257 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:16.257 "strip_size_kb": 64, 00:10:16.257 "state": "configuring", 00:10:16.257 "raid_level": "concat", 00:10:16.257 "superblock": true, 00:10:16.257 "num_base_bdevs": 4, 00:10:16.257 "num_base_bdevs_discovered": 3, 00:10:16.257 "num_base_bdevs_operational": 4, 00:10:16.257 "base_bdevs_list": [ 00:10:16.257 { 00:10:16.257 "name": "BaseBdev1", 00:10:16.257 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:16.257 "is_configured": true, 00:10:16.257 "data_offset": 2048, 00:10:16.257 "data_size": 63488 00:10:16.257 }, 00:10:16.257 { 
00:10:16.257 "name": null, 00:10:16.257 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:16.257 "is_configured": false, 00:10:16.257 "data_offset": 0, 00:10:16.257 "data_size": 63488 00:10:16.257 }, 00:10:16.257 { 00:10:16.257 "name": "BaseBdev3", 00:10:16.257 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:16.257 "is_configured": true, 00:10:16.257 "data_offset": 2048, 00:10:16.257 "data_size": 63488 00:10:16.257 }, 00:10:16.257 { 00:10:16.257 "name": "BaseBdev4", 00:10:16.257 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:16.257 "is_configured": true, 00:10:16.257 "data_offset": 2048, 00:10:16.257 "data_size": 63488 00:10:16.257 } 00:10:16.257 ] 00:10:16.257 }' 00:10:16.257 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.257 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.517 [2024-11-18 23:05:35.800712] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.517 23:05:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.517 "name": "Existed_Raid", 00:10:16.517 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:16.517 "strip_size_kb": 64, 00:10:16.517 "state": "configuring", 00:10:16.517 "raid_level": "concat", 00:10:16.517 "superblock": true, 00:10:16.517 "num_base_bdevs": 4, 00:10:16.517 "num_base_bdevs_discovered": 2, 00:10:16.517 "num_base_bdevs_operational": 4, 00:10:16.517 "base_bdevs_list": [ 00:10:16.517 { 00:10:16.517 "name": "BaseBdev1", 00:10:16.517 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:16.517 "is_configured": true, 00:10:16.517 "data_offset": 2048, 00:10:16.517 "data_size": 63488 00:10:16.517 }, 00:10:16.517 { 00:10:16.517 "name": null, 00:10:16.517 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:16.517 "is_configured": false, 00:10:16.517 "data_offset": 0, 00:10:16.517 "data_size": 63488 00:10:16.517 }, 00:10:16.517 { 00:10:16.517 "name": null, 00:10:16.517 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:16.517 "is_configured": false, 00:10:16.517 "data_offset": 0, 00:10:16.517 "data_size": 63488 00:10:16.517 }, 00:10:16.517 { 00:10:16.517 "name": "BaseBdev4", 00:10:16.517 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:16.517 "is_configured": true, 00:10:16.517 "data_offset": 2048, 00:10:16.517 "data_size": 63488 00:10:16.517 } 00:10:16.517 ] 00:10:16.517 }' 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.517 23:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.109 
23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.109 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.110 [2024-11-18 23:05:36.228045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.110 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.111 "name": "Existed_Raid", 00:10:17.111 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:17.111 "strip_size_kb": 64, 00:10:17.111 "state": "configuring", 00:10:17.111 "raid_level": "concat", 00:10:17.111 "superblock": true, 00:10:17.111 "num_base_bdevs": 4, 00:10:17.111 "num_base_bdevs_discovered": 3, 00:10:17.111 "num_base_bdevs_operational": 4, 00:10:17.111 "base_bdevs_list": [ 00:10:17.111 { 00:10:17.111 "name": "BaseBdev1", 00:10:17.111 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:17.111 "is_configured": true, 00:10:17.111 "data_offset": 2048, 00:10:17.111 "data_size": 63488 00:10:17.111 }, 00:10:17.111 { 00:10:17.111 "name": null, 00:10:17.111 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:17.111 "is_configured": false, 00:10:17.111 "data_offset": 0, 00:10:17.111 "data_size": 63488 00:10:17.111 }, 00:10:17.111 { 00:10:17.111 "name": "BaseBdev3", 00:10:17.111 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:17.112 "is_configured": true, 00:10:17.112 "data_offset": 2048, 00:10:17.112 "data_size": 63488 00:10:17.112 }, 00:10:17.112 { 00:10:17.112 "name": "BaseBdev4", 00:10:17.112 "uuid": 
"4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:17.112 "is_configured": true, 00:10:17.112 "data_offset": 2048, 00:10:17.112 "data_size": 63488 00:10:17.112 } 00:10:17.112 ] 00:10:17.112 }' 00:10:17.112 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.112 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 [2024-11-18 23:05:36.687342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.375 "name": "Existed_Raid", 00:10:17.375 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:17.375 "strip_size_kb": 64, 00:10:17.375 "state": "configuring", 00:10:17.375 "raid_level": "concat", 00:10:17.375 "superblock": true, 00:10:17.375 "num_base_bdevs": 4, 00:10:17.375 "num_base_bdevs_discovered": 2, 00:10:17.375 "num_base_bdevs_operational": 4, 00:10:17.375 "base_bdevs_list": [ 00:10:17.375 { 00:10:17.375 "name": null, 00:10:17.375 
"uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:17.375 "is_configured": false, 00:10:17.375 "data_offset": 0, 00:10:17.375 "data_size": 63488 00:10:17.375 }, 00:10:17.375 { 00:10:17.375 "name": null, 00:10:17.375 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:17.375 "is_configured": false, 00:10:17.375 "data_offset": 0, 00:10:17.375 "data_size": 63488 00:10:17.375 }, 00:10:17.375 { 00:10:17.375 "name": "BaseBdev3", 00:10:17.375 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:17.375 "is_configured": true, 00:10:17.375 "data_offset": 2048, 00:10:17.375 "data_size": 63488 00:10:17.375 }, 00:10:17.375 { 00:10:17.375 "name": "BaseBdev4", 00:10:17.375 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:17.375 "is_configured": true, 00:10:17.375 "data_offset": 2048, 00:10:17.375 "data_size": 63488 00:10:17.375 } 00:10:17.375 ] 00:10:17.375 }' 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.375 23:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.945 [2024-11-18 23:05:37.196985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.945 23:05:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.945 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.945 "name": "Existed_Raid", 00:10:17.946 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:17.946 "strip_size_kb": 64, 00:10:17.946 "state": "configuring", 00:10:17.946 "raid_level": "concat", 00:10:17.946 "superblock": true, 00:10:17.946 "num_base_bdevs": 4, 00:10:17.946 "num_base_bdevs_discovered": 3, 00:10:17.946 "num_base_bdevs_operational": 4, 00:10:17.946 "base_bdevs_list": [ 00:10:17.946 { 00:10:17.946 "name": null, 00:10:17.946 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:17.946 "is_configured": false, 00:10:17.946 "data_offset": 0, 00:10:17.946 "data_size": 63488 00:10:17.946 }, 00:10:17.946 { 00:10:17.946 "name": "BaseBdev2", 00:10:17.946 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:17.946 "is_configured": true, 00:10:17.946 "data_offset": 2048, 00:10:17.946 "data_size": 63488 00:10:17.946 }, 00:10:17.946 { 00:10:17.946 "name": "BaseBdev3", 00:10:17.946 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:17.946 "is_configured": true, 00:10:17.946 "data_offset": 2048, 00:10:17.946 "data_size": 63488 00:10:17.946 }, 00:10:17.946 { 00:10:17.946 "name": "BaseBdev4", 00:10:17.946 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:17.946 "is_configured": true, 00:10:17.946 "data_offset": 2048, 00:10:17.946 "data_size": 63488 00:10:17.946 } 00:10:17.946 ] 00:10:17.946 }' 00:10:17.946 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.946 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.515 23:05:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e7375f7b-aa68-4721-9b95-b4df1dbab422 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.515 [2024-11-18 23:05:37.698960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:18.515 [2024-11-18 23:05:37.699209] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:18.515 [2024-11-18 23:05:37.699263] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:18.515 NewBaseBdev 00:10:18.515 [2024-11-18 23:05:37.699555] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:18.515 [2024-11-18 23:05:37.699673] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:18.515 [2024-11-18 23:05:37.699731] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:18.515 [2024-11-18 23:05:37.699862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.515 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.516 
23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.516 [ 00:10:18.516 { 00:10:18.516 "name": "NewBaseBdev", 00:10:18.516 "aliases": [ 00:10:18.516 "e7375f7b-aa68-4721-9b95-b4df1dbab422" 00:10:18.516 ], 00:10:18.516 "product_name": "Malloc disk", 00:10:18.516 "block_size": 512, 00:10:18.516 "num_blocks": 65536, 00:10:18.516 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:18.516 "assigned_rate_limits": { 00:10:18.516 "rw_ios_per_sec": 0, 00:10:18.516 "rw_mbytes_per_sec": 0, 00:10:18.516 "r_mbytes_per_sec": 0, 00:10:18.516 "w_mbytes_per_sec": 0 00:10:18.516 }, 00:10:18.516 "claimed": true, 00:10:18.516 "claim_type": "exclusive_write", 00:10:18.516 "zoned": false, 00:10:18.516 "supported_io_types": { 00:10:18.516 "read": true, 00:10:18.516 "write": true, 00:10:18.516 "unmap": true, 00:10:18.516 "flush": true, 00:10:18.516 "reset": true, 00:10:18.516 "nvme_admin": false, 00:10:18.516 "nvme_io": false, 00:10:18.516 "nvme_io_md": false, 00:10:18.516 "write_zeroes": true, 00:10:18.516 "zcopy": true, 00:10:18.516 "get_zone_info": false, 00:10:18.516 "zone_management": false, 00:10:18.516 "zone_append": false, 00:10:18.516 "compare": false, 00:10:18.516 "compare_and_write": false, 00:10:18.516 "abort": true, 00:10:18.516 "seek_hole": false, 00:10:18.516 "seek_data": false, 00:10:18.516 "copy": true, 00:10:18.516 "nvme_iov_md": false 00:10:18.516 }, 00:10:18.516 "memory_domains": [ 00:10:18.516 { 00:10:18.516 "dma_device_id": "system", 00:10:18.516 "dma_device_type": 1 00:10:18.516 }, 00:10:18.516 { 00:10:18.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.516 "dma_device_type": 2 00:10:18.516 } 00:10:18.516 ], 00:10:18.516 "driver_specific": {} 00:10:18.516 } 00:10:18.516 ] 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:18.516 23:05:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.516 "name": "Existed_Raid", 00:10:18.516 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:18.516 "strip_size_kb": 64, 00:10:18.516 
"state": "online", 00:10:18.516 "raid_level": "concat", 00:10:18.516 "superblock": true, 00:10:18.516 "num_base_bdevs": 4, 00:10:18.516 "num_base_bdevs_discovered": 4, 00:10:18.516 "num_base_bdevs_operational": 4, 00:10:18.516 "base_bdevs_list": [ 00:10:18.516 { 00:10:18.516 "name": "NewBaseBdev", 00:10:18.516 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:18.516 "is_configured": true, 00:10:18.516 "data_offset": 2048, 00:10:18.516 "data_size": 63488 00:10:18.516 }, 00:10:18.516 { 00:10:18.516 "name": "BaseBdev2", 00:10:18.516 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:18.516 "is_configured": true, 00:10:18.516 "data_offset": 2048, 00:10:18.516 "data_size": 63488 00:10:18.516 }, 00:10:18.516 { 00:10:18.516 "name": "BaseBdev3", 00:10:18.516 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:18.516 "is_configured": true, 00:10:18.516 "data_offset": 2048, 00:10:18.516 "data_size": 63488 00:10:18.516 }, 00:10:18.516 { 00:10:18.516 "name": "BaseBdev4", 00:10:18.516 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:18.516 "is_configured": true, 00:10:18.516 "data_offset": 2048, 00:10:18.516 "data_size": 63488 00:10:18.516 } 00:10:18.516 ] 00:10:18.516 }' 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.516 23:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.086 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.086 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.086 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.086 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.086 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.087 
23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.087 [2024-11-18 23:05:38.182484] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.087 "name": "Existed_Raid", 00:10:19.087 "aliases": [ 00:10:19.087 "fe8acf88-2a63-4310-9394-36a54254394e" 00:10:19.087 ], 00:10:19.087 "product_name": "Raid Volume", 00:10:19.087 "block_size": 512, 00:10:19.087 "num_blocks": 253952, 00:10:19.087 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:19.087 "assigned_rate_limits": { 00:10:19.087 "rw_ios_per_sec": 0, 00:10:19.087 "rw_mbytes_per_sec": 0, 00:10:19.087 "r_mbytes_per_sec": 0, 00:10:19.087 "w_mbytes_per_sec": 0 00:10:19.087 }, 00:10:19.087 "claimed": false, 00:10:19.087 "zoned": false, 00:10:19.087 "supported_io_types": { 00:10:19.087 "read": true, 00:10:19.087 "write": true, 00:10:19.087 "unmap": true, 00:10:19.087 "flush": true, 00:10:19.087 "reset": true, 00:10:19.087 "nvme_admin": false, 00:10:19.087 "nvme_io": false, 00:10:19.087 "nvme_io_md": false, 00:10:19.087 "write_zeroes": true, 00:10:19.087 "zcopy": false, 00:10:19.087 "get_zone_info": false, 00:10:19.087 "zone_management": false, 00:10:19.087 "zone_append": false, 00:10:19.087 "compare": false, 00:10:19.087 "compare_and_write": false, 00:10:19.087 "abort": 
false, 00:10:19.087 "seek_hole": false, 00:10:19.087 "seek_data": false, 00:10:19.087 "copy": false, 00:10:19.087 "nvme_iov_md": false 00:10:19.087 }, 00:10:19.087 "memory_domains": [ 00:10:19.087 { 00:10:19.087 "dma_device_id": "system", 00:10:19.087 "dma_device_type": 1 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.087 "dma_device_type": 2 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "system", 00:10:19.087 "dma_device_type": 1 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.087 "dma_device_type": 2 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "system", 00:10:19.087 "dma_device_type": 1 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.087 "dma_device_type": 2 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "system", 00:10:19.087 "dma_device_type": 1 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.087 "dma_device_type": 2 00:10:19.087 } 00:10:19.087 ], 00:10:19.087 "driver_specific": { 00:10:19.087 "raid": { 00:10:19.087 "uuid": "fe8acf88-2a63-4310-9394-36a54254394e", 00:10:19.087 "strip_size_kb": 64, 00:10:19.087 "state": "online", 00:10:19.087 "raid_level": "concat", 00:10:19.087 "superblock": true, 00:10:19.087 "num_base_bdevs": 4, 00:10:19.087 "num_base_bdevs_discovered": 4, 00:10:19.087 "num_base_bdevs_operational": 4, 00:10:19.087 "base_bdevs_list": [ 00:10:19.087 { 00:10:19.087 "name": "NewBaseBdev", 00:10:19.087 "uuid": "e7375f7b-aa68-4721-9b95-b4df1dbab422", 00:10:19.087 "is_configured": true, 00:10:19.087 "data_offset": 2048, 00:10:19.087 "data_size": 63488 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "name": "BaseBdev2", 00:10:19.087 "uuid": "b9349b01-a4d3-47f7-8080-f99bf2ca90f9", 00:10:19.087 "is_configured": true, 00:10:19.087 "data_offset": 2048, 00:10:19.087 "data_size": 63488 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 
"name": "BaseBdev3", 00:10:19.087 "uuid": "c20f1282-8e21-4938-92f3-99b55d5855a9", 00:10:19.087 "is_configured": true, 00:10:19.087 "data_offset": 2048, 00:10:19.087 "data_size": 63488 00:10:19.087 }, 00:10:19.087 { 00:10:19.087 "name": "BaseBdev4", 00:10:19.087 "uuid": "4d993c01-6eb3-480a-be98-0d1cf05f66c3", 00:10:19.087 "is_configured": true, 00:10:19.087 "data_offset": 2048, 00:10:19.087 "data_size": 63488 00:10:19.087 } 00:10:19.087 ] 00:10:19.087 } 00:10:19.087 } 00:10:19.087 }' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.087 BaseBdev2 00:10:19.087 BaseBdev3 00:10:19.087 BaseBdev4' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.087 23:05:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.087 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.366 [2024-11-18 23:05:38.481641] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.366 [2024-11-18 23:05:38.481669] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.366 [2024-11-18 23:05:38.481744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.366 [2024-11-18 23:05:38.481804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.366 [2024-11-18 23:05:38.481813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82770 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82770 ']' 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82770 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82770 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82770' 00:10:19.366 killing process with pid 82770 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82770 00:10:19.366 [2024-11-18 23:05:38.532197] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.366 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82770 00:10:19.366 [2024-11-18 23:05:38.571689] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.627 23:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.627 00:10:19.627 real 0m9.453s 00:10:19.627 user 0m16.223s 00:10:19.627 sys 0m1.927s 00:10:19.627 ************************************ 00:10:19.627 END TEST raid_state_function_test_sb 00:10:19.627 
************************************ 00:10:19.627 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.627 23:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.627 23:05:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:19.627 23:05:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:19.627 23:05:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.627 23:05:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.627 ************************************ 00:10:19.627 START TEST raid_superblock_test 00:10:19.627 ************************************ 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.627 23:05:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.627 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83424 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83424 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83424 ']' 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.628 23:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.628 [2024-11-18 23:05:38.974211] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:19.628 [2024-11-18 23:05:38.974452] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83424 ] 00:10:19.888 [2024-11-18 23:05:39.132114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.888 [2024-11-18 23:05:39.176046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.888 [2024-11-18 23:05:39.218437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.888 [2024-11-18 23:05:39.218544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.457 
23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.457 malloc1 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.457 [2024-11-18 23:05:39.816965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.457 [2024-11-18 23:05:39.817074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.457 [2024-11-18 23:05:39.817131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.457 [2024-11-18 23:05:39.817167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.457 [2024-11-18 23:05:39.819351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.457 [2024-11-18 23:05:39.819425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.457 pt1 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.457 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 malloc2 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 [2024-11-18 23:05:39.863745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.718 [2024-11-18 23:05:39.863855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.718 [2024-11-18 23:05:39.863892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.718 [2024-11-18 23:05:39.863917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.718 [2024-11-18 23:05:39.868770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.718 [2024-11-18 23:05:39.868846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.718 
pt2 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 malloc3 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.718 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.718 [2024-11-18 23:05:39.894896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.719 [2024-11-18 23:05:39.894984] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.719 [2024-11-18 23:05:39.895033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.719 [2024-11-18 23:05:39.895059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.719 [2024-11-18 23:05:39.897122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.719 [2024-11-18 23:05:39.897191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.719 pt3 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.719 malloc4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.719 [2024-11-18 23:05:39.927259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.719 [2024-11-18 23:05:39.927375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.719 [2024-11-18 23:05:39.927408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.719 [2024-11-18 23:05:39.927441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.719 [2024-11-18 23:05:39.929440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.719 [2024-11-18 23:05:39.929509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.719 pt4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.719 [2024-11-18 23:05:39.939323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.719 [2024-11-18 
23:05:39.941125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.719 [2024-11-18 23:05:39.941231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.719 [2024-11-18 23:05:39.941320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.719 [2024-11-18 23:05:39.941524] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:20.719 [2024-11-18 23:05:39.941571] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.719 [2024-11-18 23:05:39.941818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.719 [2024-11-18 23:05:39.941994] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:20.719 [2024-11-18 23:05:39.942036] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:20.719 [2024-11-18 23:05:39.942200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.719 "name": "raid_bdev1", 00:10:20.719 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:20.719 "strip_size_kb": 64, 00:10:20.719 "state": "online", 00:10:20.719 "raid_level": "concat", 00:10:20.719 "superblock": true, 00:10:20.719 "num_base_bdevs": 4, 00:10:20.719 "num_base_bdevs_discovered": 4, 00:10:20.719 "num_base_bdevs_operational": 4, 00:10:20.719 "base_bdevs_list": [ 00:10:20.719 { 00:10:20.719 "name": "pt1", 00:10:20.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.719 "is_configured": true, 00:10:20.719 "data_offset": 2048, 00:10:20.719 "data_size": 63488 00:10:20.719 }, 00:10:20.719 { 00:10:20.719 "name": "pt2", 00:10:20.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.719 "is_configured": true, 00:10:20.719 "data_offset": 2048, 00:10:20.719 "data_size": 63488 00:10:20.719 }, 00:10:20.719 { 00:10:20.719 "name": "pt3", 00:10:20.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.719 "is_configured": true, 00:10:20.719 "data_offset": 2048, 00:10:20.719 
"data_size": 63488 00:10:20.719 }, 00:10:20.719 { 00:10:20.719 "name": "pt4", 00:10:20.719 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.719 "is_configured": true, 00:10:20.719 "data_offset": 2048, 00:10:20.719 "data_size": 63488 00:10:20.719 } 00:10:20.719 ] 00:10:20.719 }' 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.719 23:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.290 [2024-11-18 23:05:40.386758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.290 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.290 "name": "raid_bdev1", 00:10:21.290 "aliases": [ 00:10:21.290 "108376f9-69d8-4bfb-8c0a-ec1181edd56f" 
00:10:21.290 ], 00:10:21.290 "product_name": "Raid Volume", 00:10:21.290 "block_size": 512, 00:10:21.290 "num_blocks": 253952, 00:10:21.290 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:21.290 "assigned_rate_limits": { 00:10:21.290 "rw_ios_per_sec": 0, 00:10:21.290 "rw_mbytes_per_sec": 0, 00:10:21.290 "r_mbytes_per_sec": 0, 00:10:21.290 "w_mbytes_per_sec": 0 00:10:21.290 }, 00:10:21.290 "claimed": false, 00:10:21.290 "zoned": false, 00:10:21.290 "supported_io_types": { 00:10:21.290 "read": true, 00:10:21.290 "write": true, 00:10:21.290 "unmap": true, 00:10:21.290 "flush": true, 00:10:21.291 "reset": true, 00:10:21.291 "nvme_admin": false, 00:10:21.291 "nvme_io": false, 00:10:21.291 "nvme_io_md": false, 00:10:21.291 "write_zeroes": true, 00:10:21.291 "zcopy": false, 00:10:21.291 "get_zone_info": false, 00:10:21.291 "zone_management": false, 00:10:21.291 "zone_append": false, 00:10:21.291 "compare": false, 00:10:21.291 "compare_and_write": false, 00:10:21.291 "abort": false, 00:10:21.291 "seek_hole": false, 00:10:21.291 "seek_data": false, 00:10:21.291 "copy": false, 00:10:21.291 "nvme_iov_md": false 00:10:21.291 }, 00:10:21.291 "memory_domains": [ 00:10:21.291 { 00:10:21.291 "dma_device_id": "system", 00:10:21.291 "dma_device_type": 1 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.291 "dma_device_type": 2 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": "system", 00:10:21.291 "dma_device_type": 1 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.291 "dma_device_type": 2 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": "system", 00:10:21.291 "dma_device_type": 1 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.291 "dma_device_type": 2 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": "system", 00:10:21.291 "dma_device_type": 1 00:10:21.291 }, 00:10:21.291 { 00:10:21.291 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:21.291 "dma_device_type": 2 00:10:21.291 } 00:10:21.291 ], 00:10:21.291 "driver_specific": { 00:10:21.291 "raid": { 00:10:21.291 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:21.291 "strip_size_kb": 64, 00:10:21.291 "state": "online", 00:10:21.291 "raid_level": "concat", 00:10:21.291 "superblock": true, 00:10:21.291 "num_base_bdevs": 4, 00:10:21.291 "num_base_bdevs_discovered": 4, 00:10:21.291 "num_base_bdevs_operational": 4, 00:10:21.291 "base_bdevs_list": [ 00:10:21.291 { 00:10:21.291 "name": "pt1", 00:10:21.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.291 "is_configured": true, 00:10:21.291 "data_offset": 2048, 00:10:21.291 "data_size": 63488 00:10:21.291 }, 00:10:21.292 { 00:10:21.292 "name": "pt2", 00:10:21.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.292 "is_configured": true, 00:10:21.292 "data_offset": 2048, 00:10:21.292 "data_size": 63488 00:10:21.292 }, 00:10:21.292 { 00:10:21.292 "name": "pt3", 00:10:21.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.292 "is_configured": true, 00:10:21.292 "data_offset": 2048, 00:10:21.292 "data_size": 63488 00:10:21.292 }, 00:10:21.292 { 00:10:21.292 "name": "pt4", 00:10:21.292 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.292 "is_configured": true, 00:10:21.292 "data_offset": 2048, 00:10:21.292 "data_size": 63488 00:10:21.292 } 00:10:21.292 ] 00:10:21.292 } 00:10:21.292 } 00:10:21.292 }' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.292 pt2 00:10:21.292 pt3 00:10:21.292 pt4' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.292 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.293 23:05:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.293 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 [2024-11-18 23:05:40.706180] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=108376f9-69d8-4bfb-8c0a-ec1181edd56f 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 108376f9-69d8-4bfb-8c0a-ec1181edd56f ']' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 [2024-11-18 23:05:40.749823] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.555 [2024-11-18 23:05:40.749857] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.555 [2024-11-18 23:05:40.749929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.555 [2024-11-18 23:05:40.750002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.555 [2024-11-18 23:05:40.750012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:21.555 23:05:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 [2024-11-18 23:05:40.909584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.555 [2024-11-18 23:05:40.911378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.555 [2024-11-18 23:05:40.911420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:21.555 [2024-11-18 23:05:40.911447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:21.555 [2024-11-18 23:05:40.911487] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.555 [2024-11-18 23:05:40.911542] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.555 [2024-11-18 23:05:40.911564] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:21.555 [2024-11-18 23:05:40.911579] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:21.555 [2024-11-18 23:05:40.911594] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.555 [2024-11-18 23:05:40.911603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:10:21.555 request: 00:10:21.555 { 00:10:21.555 "name": "raid_bdev1", 00:10:21.555 "raid_level": "concat", 00:10:21.555 "base_bdevs": [ 00:10:21.555 "malloc1", 00:10:21.555 "malloc2", 00:10:21.555 "malloc3", 00:10:21.555 "malloc4" 00:10:21.555 ], 00:10:21.555 "strip_size_kb": 64, 00:10:21.555 "superblock": false, 00:10:21.555 "method": "bdev_raid_create", 00:10:21.555 "req_id": 1 00:10:21.555 } 00:10:21.555 Got JSON-RPC error response 00:10:21.555 response: 00:10:21.555 { 00:10:21.555 "code": -17, 00:10:21.555 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.555 } 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.815 [2024-11-18 23:05:40.969433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.815 [2024-11-18 23:05:40.969518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.815 [2024-11-18 23:05:40.969555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:21.815 [2024-11-18 23:05:40.969582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.815 [2024-11-18 23:05:40.971764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.815 [2024-11-18 23:05:40.971835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.815 [2024-11-18 23:05:40.971920] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.815 [2024-11-18 23:05:40.971992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.815 pt1 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.815 23:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.816 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.816 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.816 "name": "raid_bdev1", 00:10:21.816 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:21.816 "strip_size_kb": 64, 00:10:21.816 "state": "configuring", 00:10:21.816 "raid_level": "concat", 00:10:21.816 "superblock": true, 00:10:21.816 "num_base_bdevs": 4, 00:10:21.816 "num_base_bdevs_discovered": 1, 00:10:21.816 "num_base_bdevs_operational": 4, 00:10:21.816 "base_bdevs_list": [ 00:10:21.816 { 00:10:21.816 "name": "pt1", 00:10:21.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.816 "is_configured": true, 00:10:21.816 "data_offset": 2048, 00:10:21.816 "data_size": 63488 00:10:21.816 }, 00:10:21.816 { 00:10:21.816 "name": null, 00:10:21.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.816 "is_configured": false, 00:10:21.816 "data_offset": 2048, 00:10:21.816 "data_size": 63488 00:10:21.816 }, 00:10:21.816 { 00:10:21.816 "name": null, 00:10:21.816 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.816 "is_configured": false, 00:10:21.816 "data_offset": 2048, 00:10:21.816 "data_size": 63488 00:10:21.816 }, 00:10:21.816 { 00:10:21.816 "name": null, 00:10:21.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.816 "is_configured": false, 00:10:21.816 "data_offset": 2048, 00:10:21.816 "data_size": 63488 00:10:21.816 } 00:10:21.816 ] 00:10:21.816 }' 00:10:21.816 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.816 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.082 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:22.082 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.082 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.082 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.082 [2024-11-18 23:05:41.404690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.082 [2024-11-18 23:05:41.404783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.082 [2024-11-18 23:05:41.404806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:22.082 [2024-11-18 23:05:41.404816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.082 [2024-11-18 23:05:41.405201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.082 [2024-11-18 23:05:41.405218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.082 [2024-11-18 23:05:41.405285] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.082 [2024-11-18 23:05:41.405317] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.082 pt2 00:10:22.082 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.082 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.083 [2024-11-18 23:05:41.416679] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.083 23:05:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.083 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.351 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.351 "name": "raid_bdev1", 00:10:22.351 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:22.351 "strip_size_kb": 64, 00:10:22.351 "state": "configuring", 00:10:22.351 "raid_level": "concat", 00:10:22.351 "superblock": true, 00:10:22.351 "num_base_bdevs": 4, 00:10:22.351 "num_base_bdevs_discovered": 1, 00:10:22.351 "num_base_bdevs_operational": 4, 00:10:22.351 "base_bdevs_list": [ 00:10:22.351 { 00:10:22.351 "name": "pt1", 00:10:22.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.351 "is_configured": true, 00:10:22.351 "data_offset": 2048, 00:10:22.351 "data_size": 63488 00:10:22.351 }, 00:10:22.351 { 00:10:22.351 "name": null, 00:10:22.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.351 "is_configured": false, 00:10:22.351 "data_offset": 0, 00:10:22.351 "data_size": 63488 00:10:22.351 }, 00:10:22.351 { 00:10:22.351 "name": null, 00:10:22.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.351 "is_configured": false, 00:10:22.351 "data_offset": 2048, 00:10:22.351 "data_size": 63488 00:10:22.351 }, 00:10:22.351 { 00:10:22.351 "name": null, 00:10:22.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.351 "is_configured": false, 00:10:22.351 "data_offset": 2048, 00:10:22.351 "data_size": 63488 00:10:22.351 } 00:10:22.351 ] 00:10:22.351 }' 00:10:22.351 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.351 23:05:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.612 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:22.612 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.612 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.612 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.612 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.612 [2024-11-18 23:05:41.839940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.612 [2024-11-18 23:05:41.840042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.612 [2024-11-18 23:05:41.840074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:22.612 [2024-11-18 23:05:41.840103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.612 [2024-11-18 23:05:41.840516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.612 [2024-11-18 23:05:41.840575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.612 [2024-11-18 23:05:41.840665] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.612 [2024-11-18 23:05:41.840715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.612 pt2 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 [2024-11-18 23:05:41.851896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.613 [2024-11-18 23:05:41.851984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.613 [2024-11-18 23:05:41.852016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:22.613 [2024-11-18 23:05:41.852044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.613 [2024-11-18 23:05:41.852401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.613 [2024-11-18 23:05:41.852459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.613 [2024-11-18 23:05:41.852539] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.613 [2024-11-18 23:05:41.852564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.613 pt3 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 [2024-11-18 23:05:41.863887] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:22.613 [2024-11-18 23:05:41.863975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.613 [2024-11-18 23:05:41.864009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:22.613 [2024-11-18 23:05:41.864038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.613 [2024-11-18 23:05:41.864419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.613 [2024-11-18 23:05:41.864474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:22.613 [2024-11-18 23:05:41.864551] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:22.613 [2024-11-18 23:05:41.864597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:22.613 [2024-11-18 23:05:41.864715] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:22.613 [2024-11-18 23:05:41.864757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.613 [2024-11-18 23:05:41.865012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:22.613 [2024-11-18 23:05:41.865166] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:22.613 [2024-11-18 23:05:41.865207] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:22.613 [2024-11-18 23:05:41.865355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.613 pt4 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.613 "name": "raid_bdev1", 00:10:22.613 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:22.613 "strip_size_kb": 64, 00:10:22.613 "state": "online", 00:10:22.613 "raid_level": "concat", 00:10:22.613 
"superblock": true, 00:10:22.613 "num_base_bdevs": 4, 00:10:22.613 "num_base_bdevs_discovered": 4, 00:10:22.613 "num_base_bdevs_operational": 4, 00:10:22.613 "base_bdevs_list": [ 00:10:22.613 { 00:10:22.613 "name": "pt1", 00:10:22.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.613 "is_configured": true, 00:10:22.613 "data_offset": 2048, 00:10:22.613 "data_size": 63488 00:10:22.613 }, 00:10:22.613 { 00:10:22.613 "name": "pt2", 00:10:22.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.613 "is_configured": true, 00:10:22.613 "data_offset": 2048, 00:10:22.613 "data_size": 63488 00:10:22.613 }, 00:10:22.613 { 00:10:22.613 "name": "pt3", 00:10:22.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.613 "is_configured": true, 00:10:22.613 "data_offset": 2048, 00:10:22.613 "data_size": 63488 00:10:22.613 }, 00:10:22.613 { 00:10:22.613 "name": "pt4", 00:10:22.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.613 "is_configured": true, 00:10:22.613 "data_offset": 2048, 00:10:22.613 "data_size": 63488 00:10:22.613 } 00:10:22.613 ] 00:10:22.613 }' 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.613 23:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.183 23:05:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.183 [2024-11-18 23:05:42.299482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.183 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.183 "name": "raid_bdev1", 00:10:23.183 "aliases": [ 00:10:23.183 "108376f9-69d8-4bfb-8c0a-ec1181edd56f" 00:10:23.183 ], 00:10:23.183 "product_name": "Raid Volume", 00:10:23.183 "block_size": 512, 00:10:23.183 "num_blocks": 253952, 00:10:23.183 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:23.183 "assigned_rate_limits": { 00:10:23.183 "rw_ios_per_sec": 0, 00:10:23.183 "rw_mbytes_per_sec": 0, 00:10:23.183 "r_mbytes_per_sec": 0, 00:10:23.183 "w_mbytes_per_sec": 0 00:10:23.183 }, 00:10:23.183 "claimed": false, 00:10:23.183 "zoned": false, 00:10:23.183 "supported_io_types": { 00:10:23.183 "read": true, 00:10:23.183 "write": true, 00:10:23.183 "unmap": true, 00:10:23.183 "flush": true, 00:10:23.183 "reset": true, 00:10:23.183 "nvme_admin": false, 00:10:23.183 "nvme_io": false, 00:10:23.183 "nvme_io_md": false, 00:10:23.183 "write_zeroes": true, 00:10:23.183 "zcopy": false, 00:10:23.184 "get_zone_info": false, 00:10:23.184 "zone_management": false, 00:10:23.184 "zone_append": false, 00:10:23.184 "compare": false, 00:10:23.184 "compare_and_write": false, 00:10:23.184 "abort": false, 00:10:23.184 "seek_hole": false, 00:10:23.184 "seek_data": false, 00:10:23.184 "copy": false, 00:10:23.184 "nvme_iov_md": false 00:10:23.184 }, 00:10:23.184 
"memory_domains": [ 00:10:23.184 { 00:10:23.184 "dma_device_id": "system", 00:10:23.184 "dma_device_type": 1 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.184 "dma_device_type": 2 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "system", 00:10:23.184 "dma_device_type": 1 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.184 "dma_device_type": 2 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "system", 00:10:23.184 "dma_device_type": 1 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.184 "dma_device_type": 2 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "system", 00:10:23.184 "dma_device_type": 1 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.184 "dma_device_type": 2 00:10:23.184 } 00:10:23.184 ], 00:10:23.184 "driver_specific": { 00:10:23.184 "raid": { 00:10:23.184 "uuid": "108376f9-69d8-4bfb-8c0a-ec1181edd56f", 00:10:23.184 "strip_size_kb": 64, 00:10:23.184 "state": "online", 00:10:23.184 "raid_level": "concat", 00:10:23.184 "superblock": true, 00:10:23.184 "num_base_bdevs": 4, 00:10:23.184 "num_base_bdevs_discovered": 4, 00:10:23.184 "num_base_bdevs_operational": 4, 00:10:23.184 "base_bdevs_list": [ 00:10:23.184 { 00:10:23.184 "name": "pt1", 00:10:23.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.184 "is_configured": true, 00:10:23.184 "data_offset": 2048, 00:10:23.184 "data_size": 63488 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "name": "pt2", 00:10:23.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.184 "is_configured": true, 00:10:23.184 "data_offset": 2048, 00:10:23.184 "data_size": 63488 00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "name": "pt3", 00:10:23.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.184 "is_configured": true, 00:10:23.184 "data_offset": 2048, 00:10:23.184 "data_size": 63488 
00:10:23.184 }, 00:10:23.184 { 00:10:23.184 "name": "pt4", 00:10:23.184 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.184 "is_configured": true, 00:10:23.184 "data_offset": 2048, 00:10:23.184 "data_size": 63488 00:10:23.184 } 00:10:23.184 ] 00:10:23.184 } 00:10:23.184 } 00:10:23.184 }' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.184 pt2 00:10:23.184 pt3 00:10:23.184 pt4' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.184 
23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.184 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.444 [2024-11-18 23:05:42.594924] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 108376f9-69d8-4bfb-8c0a-ec1181edd56f '!=' 108376f9-69d8-4bfb-8c0a-ec1181edd56f ']' 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83424 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83424 ']' 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83424 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:23.444 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.445 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83424 00:10:23.445 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.445 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.445 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83424' 00:10:23.445 killing process with pid 83424 00:10:23.445 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83424 00:10:23.445 [2024-11-18 23:05:42.665724] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.445 [2024-11-18 23:05:42.665807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.445 [2024-11-18 23:05:42.665872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.445 [2024-11-18 23:05:42.665882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:23.445 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83424 00:10:23.445 [2024-11-18 23:05:42.708186] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.704 ************************************ 00:10:23.704 END TEST raid_superblock_test 00:10:23.704 ************************************ 00:10:23.704 23:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.704 00:10:23.704 real 0m4.051s 00:10:23.704 user 0m6.383s 00:10:23.704 sys 0m0.901s 00:10:23.704 23:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.704 23:05:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.704 23:05:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:23.704 23:05:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:23.704 23:05:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.704 23:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.704 ************************************ 00:10:23.704 START TEST raid_read_error_test 00:10:23.704 ************************************ 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.704 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UkAIUMUm6w 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83672 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83672 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83672 ']' 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.705 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.965 [2024-11-18 23:05:43.116583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:23.965 [2024-11-18 23:05:43.116800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83672 ] 00:10:23.965 [2024-11-18 23:05:43.275370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.965 [2024-11-18 23:05:43.319551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.225 [2024-11-18 23:05:43.361910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.225 [2024-11-18 23:05:43.362020] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 BaseBdev1_malloc 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 true 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 [2024-11-18 23:05:43.967988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.796 [2024-11-18 23:05:43.968040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.796 [2024-11-18 23:05:43.968059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.796 [2024-11-18 23:05:43.968067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.796 [2024-11-18 23:05:43.970147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.796 [2024-11-18 23:05:43.970186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.796 BaseBdev1 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.796 23:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 BaseBdev2_malloc 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 true 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.796 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 [2024-11-18 23:05:44.024666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.797 [2024-11-18 23:05:44.024733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.797 [2024-11-18 23:05:44.024760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.797 [2024-11-18 23:05:44.024773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.797 [2024-11-18 23:05:44.027664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.797 [2024-11-18 23:05:44.027704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.797 BaseBdev2 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 BaseBdev3_malloc 00:10:24.797 23:05:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 true 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 [2024-11-18 23:05:44.065114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.797 [2024-11-18 23:05:44.065211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.797 [2024-11-18 23:05:44.065246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.797 [2024-11-18 23:05:44.065273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.797 [2024-11-18 23:05:44.067293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.797 [2024-11-18 23:05:44.067369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.797 BaseBdev3 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 BaseBdev4_malloc 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 true 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 [2024-11-18 23:05:44.105579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:24.797 [2024-11-18 23:05:44.105621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.797 [2024-11-18 23:05:44.105657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:24.797 [2024-11-18 23:05:44.105665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.797 [2024-11-18 23:05:44.107620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.797 [2024-11-18 23:05:44.107653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:24.797 BaseBdev4 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 [2024-11-18 23:05:44.117610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.797 [2024-11-18 23:05:44.119397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.797 [2024-11-18 23:05:44.119482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.797 [2024-11-18 23:05:44.119533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.797 [2024-11-18 23:05:44.119727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:24.797 [2024-11-18 23:05:44.119739] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.797 [2024-11-18 23:05:44.119958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:24.797 [2024-11-18 23:05:44.120092] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:24.797 [2024-11-18 23:05:44.120104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:24.797 [2024-11-18 23:05:44.120227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:24.797 23:05:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.057 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.057 "name": "raid_bdev1", 00:10:25.057 "uuid": "ae06b9a8-dbcb-4428-90d5-8c0ff5b68ab7", 00:10:25.057 "strip_size_kb": 64, 00:10:25.057 "state": "online", 00:10:25.057 "raid_level": "concat", 00:10:25.057 "superblock": true, 00:10:25.057 "num_base_bdevs": 4, 00:10:25.057 "num_base_bdevs_discovered": 4, 00:10:25.057 "num_base_bdevs_operational": 4, 00:10:25.057 "base_bdevs_list": [ 
00:10:25.057 { 00:10:25.057 "name": "BaseBdev1", 00:10:25.057 "uuid": "0eccedcd-8ee0-5855-9d5a-ebfbafc78044", 00:10:25.057 "is_configured": true, 00:10:25.057 "data_offset": 2048, 00:10:25.057 "data_size": 63488 00:10:25.057 }, 00:10:25.057 { 00:10:25.057 "name": "BaseBdev2", 00:10:25.057 "uuid": "2a3562d0-8346-5ec9-8068-99bf8c0c5e9a", 00:10:25.057 "is_configured": true, 00:10:25.057 "data_offset": 2048, 00:10:25.057 "data_size": 63488 00:10:25.057 }, 00:10:25.057 { 00:10:25.057 "name": "BaseBdev3", 00:10:25.057 "uuid": "97b3ba17-bb4c-5b9c-802f-ebf1efeba13e", 00:10:25.057 "is_configured": true, 00:10:25.057 "data_offset": 2048, 00:10:25.057 "data_size": 63488 00:10:25.057 }, 00:10:25.057 { 00:10:25.057 "name": "BaseBdev4", 00:10:25.057 "uuid": "8b789b92-4dfd-5329-91ac-619dacf7532b", 00:10:25.057 "is_configured": true, 00:10:25.057 "data_offset": 2048, 00:10:25.057 "data_size": 63488 00:10:25.057 } 00:10:25.057 ] 00:10:25.057 }' 00:10:25.057 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.057 23:05:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.317 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.317 23:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.317 [2024-11-18 23:05:44.605091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.256 23:05:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.256 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.256 23:05:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.256 "name": "raid_bdev1", 00:10:26.256 "uuid": "ae06b9a8-dbcb-4428-90d5-8c0ff5b68ab7", 00:10:26.256 "strip_size_kb": 64, 00:10:26.256 "state": "online", 00:10:26.256 "raid_level": "concat", 00:10:26.256 "superblock": true, 00:10:26.256 "num_base_bdevs": 4, 00:10:26.256 "num_base_bdevs_discovered": 4, 00:10:26.256 "num_base_bdevs_operational": 4, 00:10:26.256 "base_bdevs_list": [ 00:10:26.256 { 00:10:26.256 "name": "BaseBdev1", 00:10:26.256 "uuid": "0eccedcd-8ee0-5855-9d5a-ebfbafc78044", 00:10:26.256 "is_configured": true, 00:10:26.256 "data_offset": 2048, 00:10:26.256 "data_size": 63488 00:10:26.256 }, 00:10:26.257 { 00:10:26.257 "name": "BaseBdev2", 00:10:26.257 "uuid": "2a3562d0-8346-5ec9-8068-99bf8c0c5e9a", 00:10:26.257 "is_configured": true, 00:10:26.257 "data_offset": 2048, 00:10:26.257 "data_size": 63488 00:10:26.257 }, 00:10:26.257 { 00:10:26.257 "name": "BaseBdev3", 00:10:26.257 "uuid": "97b3ba17-bb4c-5b9c-802f-ebf1efeba13e", 00:10:26.257 "is_configured": true, 00:10:26.257 "data_offset": 2048, 00:10:26.257 "data_size": 63488 00:10:26.257 }, 00:10:26.257 { 00:10:26.257 "name": "BaseBdev4", 00:10:26.257 "uuid": "8b789b92-4dfd-5329-91ac-619dacf7532b", 00:10:26.257 "is_configured": true, 00:10:26.257 "data_offset": 2048, 00:10:26.257 "data_size": 63488 00:10:26.257 } 00:10:26.257 ] 00:10:26.257 }' 00:10:26.257 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.257 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.827 23:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.827 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.827 23:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.827 [2024-11-18 23:05:45.997054] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.827 [2024-11-18 23:05:45.997136] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.827 [2024-11-18 23:05:45.999827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.827 { 00:10:26.827 "results": [ 00:10:26.827 { 00:10:26.827 "job": "raid_bdev1", 00:10:26.827 "core_mask": "0x1", 00:10:26.827 "workload": "randrw", 00:10:26.827 "percentage": 50, 00:10:26.827 "status": "finished", 00:10:26.827 "queue_depth": 1, 00:10:26.827 "io_size": 131072, 00:10:26.827 "runtime": 1.392754, 00:10:26.827 "iops": 17157.37308957648, 00:10:26.827 "mibps": 2144.67163619706, 00:10:26.827 "io_failed": 1, 00:10:26.827 "io_timeout": 0, 00:10:26.827 "avg_latency_us": 80.92354842370267, 00:10:26.827 "min_latency_us": 24.370305676855896, 00:10:26.827 "max_latency_us": 1352.216593886463 00:10:26.827 } 00:10:26.827 ], 00:10:26.827 "core_count": 1 00:10:26.827 } 00:10:26.827 [2024-11-18 23:05:45.999929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.827 [2024-11-18 23:05:45.999984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.827 [2024-11-18 23:05:46.000000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83672 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83672 ']' 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83672 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83672 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.827 killing process with pid 83672 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83672' 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83672 00:10:26.827 [2024-11-18 23:05:46.031170] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.827 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83672 00:10:26.827 [2024-11-18 23:05:46.065442] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UkAIUMUm6w 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.087 ************************************ 00:10:27.087 END TEST raid_read_error_test 00:10:27.087 ************************************ 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:27.087 00:10:27.087 real 0m3.294s 
00:10:27.087 user 0m4.067s 00:10:27.087 sys 0m0.573s 00:10:27.087 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.088 23:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.088 23:05:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:27.088 23:05:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:27.088 23:05:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.088 23:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.088 ************************************ 00:10:27.088 START TEST raid_write_error_test 00:10:27.088 ************************************ 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TC1Bl8p6ca 00:10:27.088 23:05:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83801 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83801 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83801 ']' 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.088 23:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.348 [2024-11-18 23:05:46.487645] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:27.348 [2024-11-18 23:05:46.487851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83801 ] 00:10:27.348 [2024-11-18 23:05:46.646797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.348 [2024-11-18 23:05:46.690738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.608 [2024-11-18 23:05:46.732817] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.608 [2024-11-18 23:05:46.732928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.178 BaseBdev1_malloc 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.178 true 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:28.178 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 [2024-11-18 23:05:47.338779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.179 [2024-11-18 23:05:47.338837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.179 [2024-11-18 23:05:47.338877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.179 [2024-11-18 23:05:47.338887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.179 [2024-11-18 23:05:47.340933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.179 [2024-11-18 23:05:47.341045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.179 BaseBdev1 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 BaseBdev2_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.179 23:05:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 true 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 [2024-11-18 23:05:47.393047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.179 [2024-11-18 23:05:47.393115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.179 [2024-11-18 23:05:47.393143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:28.179 [2024-11-18 23:05:47.393158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.179 [2024-11-18 23:05:47.395789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.179 [2024-11-18 23:05:47.395827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.179 BaseBdev2 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:28.179 BaseBdev3_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 true 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 [2024-11-18 23:05:47.433649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.179 [2024-11-18 23:05:47.433690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.179 [2024-11-18 23:05:47.433722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:28.179 [2024-11-18 23:05:47.433730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.179 [2024-11-18 23:05:47.435686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.179 [2024-11-18 23:05:47.435720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.179 BaseBdev3 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 BaseBdev4_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 true 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 [2024-11-18 23:05:47.473907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:28.179 [2024-11-18 23:05:47.473948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.179 [2024-11-18 23:05:47.473983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:28.179 [2024-11-18 23:05:47.473991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.179 [2024-11-18 23:05:47.475957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.179 [2024-11-18 23:05:47.475992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:28.179 BaseBdev4 
00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 [2024-11-18 23:05:47.485931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.179 [2024-11-18 23:05:47.487710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.179 [2024-11-18 23:05:47.487792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.179 [2024-11-18 23:05:47.487843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.179 [2024-11-18 23:05:47.488036] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:28.179 [2024-11-18 23:05:47.488047] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.179 [2024-11-18 23:05:47.488287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:28.179 [2024-11-18 23:05:47.488433] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:28.179 [2024-11-18 23:05:47.488446] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:28.179 [2024-11-18 23:05:47.488549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.179 "name": "raid_bdev1", 00:10:28.180 "uuid": "9577fcc0-b4c3-4c87-b889-1dbc72a3a1d2", 00:10:28.180 "strip_size_kb": 64, 00:10:28.180 "state": "online", 00:10:28.180 "raid_level": "concat", 00:10:28.180 "superblock": true, 00:10:28.180 "num_base_bdevs": 4, 00:10:28.180 "num_base_bdevs_discovered": 4, 00:10:28.180 
"num_base_bdevs_operational": 4, 00:10:28.180 "base_bdevs_list": [ 00:10:28.180 { 00:10:28.180 "name": "BaseBdev1", 00:10:28.180 "uuid": "23db4084-82f4-5063-86f4-92dd5fe0e044", 00:10:28.180 "is_configured": true, 00:10:28.180 "data_offset": 2048, 00:10:28.180 "data_size": 63488 00:10:28.180 }, 00:10:28.180 { 00:10:28.180 "name": "BaseBdev2", 00:10:28.180 "uuid": "e8982335-1fe2-5790-8c23-84bf7f201f50", 00:10:28.180 "is_configured": true, 00:10:28.180 "data_offset": 2048, 00:10:28.180 "data_size": 63488 00:10:28.180 }, 00:10:28.180 { 00:10:28.180 "name": "BaseBdev3", 00:10:28.180 "uuid": "0f407cc6-f1d8-5272-a606-bd32caa40e4d", 00:10:28.180 "is_configured": true, 00:10:28.180 "data_offset": 2048, 00:10:28.180 "data_size": 63488 00:10:28.180 }, 00:10:28.180 { 00:10:28.180 "name": "BaseBdev4", 00:10:28.180 "uuid": "2dbdf672-13d5-5c28-a9f0-3f64e3182093", 00:10:28.180 "is_configured": true, 00:10:28.180 "data_offset": 2048, 00:10:28.180 "data_size": 63488 00:10:28.180 } 00:10:28.180 ] 00:10:28.180 }' 00:10:28.180 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.180 23:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.748 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.748 23:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.748 [2024-11-18 23:05:48.061336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.688 23:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.688 23:05:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.688 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.688 "name": "raid_bdev1", 00:10:29.688 "uuid": "9577fcc0-b4c3-4c87-b889-1dbc72a3a1d2", 00:10:29.688 "strip_size_kb": 64, 00:10:29.688 "state": "online", 00:10:29.688 "raid_level": "concat", 00:10:29.688 "superblock": true, 00:10:29.688 "num_base_bdevs": 4, 00:10:29.688 "num_base_bdevs_discovered": 4, 00:10:29.688 "num_base_bdevs_operational": 4, 00:10:29.688 "base_bdevs_list": [ 00:10:29.688 { 00:10:29.688 "name": "BaseBdev1", 00:10:29.688 "uuid": "23db4084-82f4-5063-86f4-92dd5fe0e044", 00:10:29.688 "is_configured": true, 00:10:29.688 "data_offset": 2048, 00:10:29.688 "data_size": 63488 00:10:29.688 }, 00:10:29.688 { 00:10:29.688 "name": "BaseBdev2", 00:10:29.688 "uuid": "e8982335-1fe2-5790-8c23-84bf7f201f50", 00:10:29.688 "is_configured": true, 00:10:29.688 "data_offset": 2048, 00:10:29.688 "data_size": 63488 00:10:29.688 }, 00:10:29.688 { 00:10:29.688 "name": "BaseBdev3", 00:10:29.688 "uuid": "0f407cc6-f1d8-5272-a606-bd32caa40e4d", 00:10:29.688 "is_configured": true, 00:10:29.688 "data_offset": 2048, 00:10:29.688 "data_size": 63488 00:10:29.688 }, 00:10:29.688 { 00:10:29.688 "name": "BaseBdev4", 00:10:29.688 "uuid": "2dbdf672-13d5-5c28-a9f0-3f64e3182093", 00:10:29.688 "is_configured": true, 00:10:29.688 "data_offset": 2048, 00:10:29.688 "data_size": 63488 00:10:29.688 } 00:10:29.688 ] 00:10:29.688 }' 00:10:29.688 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.688 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.256 [2024-11-18 23:05:49.441340] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.256 [2024-11-18 23:05:49.441372] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.256 [2024-11-18 23:05:49.443919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.256 [2024-11-18 23:05:49.444016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.256 [2024-11-18 23:05:49.444095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.256 [2024-11-18 23:05:49.444140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:30.256 { 00:10:30.256 "results": [ 00:10:30.256 { 00:10:30.256 "job": "raid_bdev1", 00:10:30.256 "core_mask": "0x1", 00:10:30.256 "workload": "randrw", 00:10:30.256 "percentage": 50, 00:10:30.256 "status": "finished", 00:10:30.256 "queue_depth": 1, 00:10:30.256 "io_size": 131072, 00:10:30.256 "runtime": 1.38073, 00:10:30.256 "iops": 17245.949606367645, 00:10:30.256 "mibps": 2155.7437007959556, 00:10:30.256 "io_failed": 1, 00:10:30.256 "io_timeout": 0, 00:10:30.256 "avg_latency_us": 80.42876084895099, 00:10:30.256 "min_latency_us": 24.705676855895195, 00:10:30.256 "max_latency_us": 1452.380786026201 00:10:30.256 } 00:10:30.256 ], 00:10:30.256 "core_count": 1 00:10:30.256 } 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83801 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83801 ']' 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83801 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83801 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.256 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.257 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83801' 00:10:30.257 killing process with pid 83801 00:10:30.257 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83801 00:10:30.257 [2024-11-18 23:05:49.488261] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.257 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83801 00:10:30.257 [2024-11-18 23:05:49.522230] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TC1Bl8p6ca 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:30.517 00:10:30.517 real 0m3.384s 00:10:30.517 user 0m4.248s 
00:10:30.517 sys 0m0.584s 00:10:30.517 ************************************ 00:10:30.517 END TEST raid_write_error_test 00:10:30.517 ************************************ 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.517 23:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.517 23:05:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.517 23:05:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:30.517 23:05:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.517 23:05:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.517 23:05:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.517 ************************************ 00:10:30.517 START TEST raid_state_function_test 00:10:30.517 ************************************ 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.517 
23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:30.517 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:30.518 23:05:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83928 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83928' 00:10:30.518 Process raid pid: 83928 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83928 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83928 ']' 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.518 23:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.778 [2024-11-18 23:05:49.933888] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:30.778 [2024-11-18 23:05:49.934096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.778 [2024-11-18 23:05:50.091550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.778 [2024-11-18 23:05:50.137024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.037 [2024-11-18 23:05:50.179163] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.037 [2024-11-18 23:05:50.179323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.616 [2024-11-18 23:05:50.756491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.616 [2024-11-18 23:05:50.756603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.616 [2024-11-18 23:05:50.756645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.616 [2024-11-18 23:05:50.756669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.616 [2024-11-18 23:05:50.756688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:31.616 [2024-11-18 23:05:50.756711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.616 [2024-11-18 23:05:50.756728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.616 [2024-11-18 23:05:50.756749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.616 "name": "Existed_Raid", 00:10:31.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.616 "strip_size_kb": 0, 00:10:31.616 "state": "configuring", 00:10:31.616 "raid_level": "raid1", 00:10:31.616 "superblock": false, 00:10:31.616 "num_base_bdevs": 4, 00:10:31.616 "num_base_bdevs_discovered": 0, 00:10:31.616 "num_base_bdevs_operational": 4, 00:10:31.616 "base_bdevs_list": [ 00:10:31.616 { 00:10:31.616 "name": "BaseBdev1", 00:10:31.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.616 "is_configured": false, 00:10:31.616 "data_offset": 0, 00:10:31.616 "data_size": 0 00:10:31.616 }, 00:10:31.616 { 00:10:31.616 "name": "BaseBdev2", 00:10:31.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.616 "is_configured": false, 00:10:31.616 "data_offset": 0, 00:10:31.616 "data_size": 0 00:10:31.616 }, 00:10:31.616 { 00:10:31.616 "name": "BaseBdev3", 00:10:31.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.616 "is_configured": false, 00:10:31.616 "data_offset": 0, 00:10:31.616 "data_size": 0 00:10:31.616 }, 00:10:31.616 { 00:10:31.616 "name": "BaseBdev4", 00:10:31.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.616 "is_configured": false, 00:10:31.616 "data_offset": 0, 00:10:31.616 "data_size": 0 00:10:31.616 } 00:10:31.616 ] 00:10:31.616 }' 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.616 23:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 [2024-11-18 23:05:51.219628] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.877 [2024-11-18 23:05:51.219675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 [2024-11-18 23:05:51.231635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.877 [2024-11-18 23:05:51.231677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.877 [2024-11-18 23:05:51.231686] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.877 [2024-11-18 23:05:51.231695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.877 [2024-11-18 23:05:51.231701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.877 [2024-11-18 23:05:51.231710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.877 [2024-11-18 23:05:51.231715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.877 [2024-11-18 23:05:51.231723] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 [2024-11-18 23:05:51.252526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.137 BaseBdev1 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.137 [ 00:10:32.137 { 00:10:32.137 "name": "BaseBdev1", 00:10:32.137 "aliases": [ 00:10:32.137 "661a6ebb-feed-497c-8a12-f2c32bff20c0" 00:10:32.137 ], 00:10:32.137 "product_name": "Malloc disk", 00:10:32.137 "block_size": 512, 00:10:32.137 "num_blocks": 65536, 00:10:32.137 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:32.137 "assigned_rate_limits": { 00:10:32.137 "rw_ios_per_sec": 0, 00:10:32.137 "rw_mbytes_per_sec": 0, 00:10:32.137 "r_mbytes_per_sec": 0, 00:10:32.137 "w_mbytes_per_sec": 0 00:10:32.137 }, 00:10:32.137 "claimed": true, 00:10:32.137 "claim_type": "exclusive_write", 00:10:32.137 "zoned": false, 00:10:32.137 "supported_io_types": { 00:10:32.137 "read": true, 00:10:32.137 "write": true, 00:10:32.137 "unmap": true, 00:10:32.137 "flush": true, 00:10:32.137 "reset": true, 00:10:32.137 "nvme_admin": false, 00:10:32.137 "nvme_io": false, 00:10:32.137 "nvme_io_md": false, 00:10:32.137 "write_zeroes": true, 00:10:32.137 "zcopy": true, 00:10:32.137 "get_zone_info": false, 00:10:32.137 "zone_management": false, 00:10:32.137 "zone_append": false, 00:10:32.137 "compare": false, 00:10:32.137 "compare_and_write": false, 00:10:32.137 "abort": true, 00:10:32.137 "seek_hole": false, 00:10:32.137 "seek_data": false, 00:10:32.137 "copy": true, 00:10:32.137 "nvme_iov_md": false 00:10:32.137 }, 00:10:32.137 "memory_domains": [ 00:10:32.137 { 00:10:32.137 "dma_device_id": "system", 00:10:32.137 "dma_device_type": 1 00:10:32.137 }, 00:10:32.137 { 00:10:32.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.137 "dma_device_type": 2 00:10:32.137 } 00:10:32.137 ], 00:10:32.137 "driver_specific": {} 00:10:32.137 } 00:10:32.137 ] 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.137 "name": "Existed_Raid", 
00:10:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.137 "strip_size_kb": 0, 00:10:32.137 "state": "configuring", 00:10:32.137 "raid_level": "raid1", 00:10:32.137 "superblock": false, 00:10:32.137 "num_base_bdevs": 4, 00:10:32.137 "num_base_bdevs_discovered": 1, 00:10:32.137 "num_base_bdevs_operational": 4, 00:10:32.137 "base_bdevs_list": [ 00:10:32.137 { 00:10:32.137 "name": "BaseBdev1", 00:10:32.137 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:32.137 "is_configured": true, 00:10:32.137 "data_offset": 0, 00:10:32.137 "data_size": 65536 00:10:32.137 }, 00:10:32.137 { 00:10:32.137 "name": "BaseBdev2", 00:10:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.137 "is_configured": false, 00:10:32.137 "data_offset": 0, 00:10:32.137 "data_size": 0 00:10:32.137 }, 00:10:32.137 { 00:10:32.137 "name": "BaseBdev3", 00:10:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.137 "is_configured": false, 00:10:32.137 "data_offset": 0, 00:10:32.137 "data_size": 0 00:10:32.137 }, 00:10:32.137 { 00:10:32.137 "name": "BaseBdev4", 00:10:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.137 "is_configured": false, 00:10:32.137 "data_offset": 0, 00:10:32.137 "data_size": 0 00:10:32.137 } 00:10:32.137 ] 00:10:32.137 }' 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.137 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.397 [2024-11-18 23:05:51.727804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.397 [2024-11-18 23:05:51.727853] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.397 [2024-11-18 23:05:51.739804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.397 [2024-11-18 23:05:51.741639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.397 [2024-11-18 23:05:51.741675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.397 [2024-11-18 23:05:51.741684] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.397 [2024-11-18 23:05:51.741692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.397 [2024-11-18 23:05:51.741697] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.397 [2024-11-18 23:05:51.741705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.397 
23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.397 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.657 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.657 "name": "Existed_Raid", 00:10:32.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.657 "strip_size_kb": 0, 00:10:32.657 "state": "configuring", 00:10:32.657 "raid_level": "raid1", 00:10:32.657 "superblock": false, 00:10:32.657 "num_base_bdevs": 4, 00:10:32.657 "num_base_bdevs_discovered": 1, 
00:10:32.657 "num_base_bdevs_operational": 4, 00:10:32.657 "base_bdevs_list": [ 00:10:32.657 { 00:10:32.657 "name": "BaseBdev1", 00:10:32.657 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:32.657 "is_configured": true, 00:10:32.657 "data_offset": 0, 00:10:32.657 "data_size": 65536 00:10:32.657 }, 00:10:32.657 { 00:10:32.657 "name": "BaseBdev2", 00:10:32.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.657 "is_configured": false, 00:10:32.657 "data_offset": 0, 00:10:32.657 "data_size": 0 00:10:32.657 }, 00:10:32.657 { 00:10:32.657 "name": "BaseBdev3", 00:10:32.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.657 "is_configured": false, 00:10:32.657 "data_offset": 0, 00:10:32.657 "data_size": 0 00:10:32.657 }, 00:10:32.657 { 00:10:32.657 "name": "BaseBdev4", 00:10:32.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.657 "is_configured": false, 00:10:32.657 "data_offset": 0, 00:10:32.657 "data_size": 0 00:10:32.657 } 00:10:32.657 ] 00:10:32.657 }' 00:10:32.657 23:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.657 23:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.919 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.919 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.919 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.919 [2024-11-18 23:05:52.131854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.919 BaseBdev2 00:10:32.919 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.919 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.919 23:05:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.920 [ 00:10:32.920 { 00:10:32.920 "name": "BaseBdev2", 00:10:32.920 "aliases": [ 00:10:32.920 "b5edaa84-3fab-441a-b87d-4767f174b1cd" 00:10:32.920 ], 00:10:32.920 "product_name": "Malloc disk", 00:10:32.920 "block_size": 512, 00:10:32.920 "num_blocks": 65536, 00:10:32.920 "uuid": "b5edaa84-3fab-441a-b87d-4767f174b1cd", 00:10:32.920 "assigned_rate_limits": { 00:10:32.920 "rw_ios_per_sec": 0, 00:10:32.920 "rw_mbytes_per_sec": 0, 00:10:32.920 "r_mbytes_per_sec": 0, 00:10:32.920 "w_mbytes_per_sec": 0 00:10:32.920 }, 00:10:32.920 "claimed": true, 00:10:32.920 "claim_type": "exclusive_write", 00:10:32.920 "zoned": false, 00:10:32.920 "supported_io_types": { 00:10:32.920 "read": true, 
00:10:32.920 "write": true, 00:10:32.920 "unmap": true, 00:10:32.920 "flush": true, 00:10:32.920 "reset": true, 00:10:32.920 "nvme_admin": false, 00:10:32.920 "nvme_io": false, 00:10:32.920 "nvme_io_md": false, 00:10:32.920 "write_zeroes": true, 00:10:32.920 "zcopy": true, 00:10:32.920 "get_zone_info": false, 00:10:32.920 "zone_management": false, 00:10:32.920 "zone_append": false, 00:10:32.920 "compare": false, 00:10:32.920 "compare_and_write": false, 00:10:32.920 "abort": true, 00:10:32.920 "seek_hole": false, 00:10:32.920 "seek_data": false, 00:10:32.920 "copy": true, 00:10:32.920 "nvme_iov_md": false 00:10:32.920 }, 00:10:32.920 "memory_domains": [ 00:10:32.920 { 00:10:32.920 "dma_device_id": "system", 00:10:32.920 "dma_device_type": 1 00:10:32.920 }, 00:10:32.920 { 00:10:32.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.920 "dma_device_type": 2 00:10:32.920 } 00:10:32.920 ], 00:10:32.920 "driver_specific": {} 00:10:32.920 } 00:10:32.920 ] 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.920 "name": "Existed_Raid", 00:10:32.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.920 "strip_size_kb": 0, 00:10:32.920 "state": "configuring", 00:10:32.920 "raid_level": "raid1", 00:10:32.920 "superblock": false, 00:10:32.920 "num_base_bdevs": 4, 00:10:32.920 "num_base_bdevs_discovered": 2, 00:10:32.920 "num_base_bdevs_operational": 4, 00:10:32.920 "base_bdevs_list": [ 00:10:32.920 { 00:10:32.920 "name": "BaseBdev1", 00:10:32.920 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:32.920 "is_configured": true, 00:10:32.920 "data_offset": 0, 00:10:32.920 "data_size": 65536 00:10:32.920 }, 00:10:32.920 { 00:10:32.920 "name": "BaseBdev2", 00:10:32.920 "uuid": "b5edaa84-3fab-441a-b87d-4767f174b1cd", 00:10:32.920 "is_configured": true, 
00:10:32.920 "data_offset": 0, 00:10:32.920 "data_size": 65536 00:10:32.920 }, 00:10:32.920 { 00:10:32.920 "name": "BaseBdev3", 00:10:32.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.920 "is_configured": false, 00:10:32.920 "data_offset": 0, 00:10:32.920 "data_size": 0 00:10:32.920 }, 00:10:32.920 { 00:10:32.920 "name": "BaseBdev4", 00:10:32.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.920 "is_configured": false, 00:10:32.920 "data_offset": 0, 00:10:32.920 "data_size": 0 00:10:32.920 } 00:10:32.920 ] 00:10:32.920 }' 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.920 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 [2024-11-18 23:05:52.638257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.490 BaseBdev3 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 [ 00:10:33.490 { 00:10:33.490 "name": "BaseBdev3", 00:10:33.490 "aliases": [ 00:10:33.490 "1d52bca5-acd8-4a14-8237-aad1a5a4d4e9" 00:10:33.490 ], 00:10:33.490 "product_name": "Malloc disk", 00:10:33.490 "block_size": 512, 00:10:33.490 "num_blocks": 65536, 00:10:33.490 "uuid": "1d52bca5-acd8-4a14-8237-aad1a5a4d4e9", 00:10:33.490 "assigned_rate_limits": { 00:10:33.490 "rw_ios_per_sec": 0, 00:10:33.490 "rw_mbytes_per_sec": 0, 00:10:33.490 "r_mbytes_per_sec": 0, 00:10:33.490 "w_mbytes_per_sec": 0 00:10:33.490 }, 00:10:33.490 "claimed": true, 00:10:33.490 "claim_type": "exclusive_write", 00:10:33.490 "zoned": false, 00:10:33.490 "supported_io_types": { 00:10:33.490 "read": true, 00:10:33.490 "write": true, 00:10:33.490 "unmap": true, 00:10:33.490 "flush": true, 00:10:33.490 "reset": true, 00:10:33.490 "nvme_admin": false, 00:10:33.490 "nvme_io": false, 00:10:33.490 "nvme_io_md": false, 00:10:33.490 "write_zeroes": true, 00:10:33.490 "zcopy": true, 00:10:33.490 "get_zone_info": false, 00:10:33.490 "zone_management": false, 00:10:33.490 "zone_append": false, 00:10:33.490 "compare": false, 00:10:33.490 "compare_and_write": false, 
00:10:33.490 "abort": true, 00:10:33.490 "seek_hole": false, 00:10:33.490 "seek_data": false, 00:10:33.490 "copy": true, 00:10:33.490 "nvme_iov_md": false 00:10:33.490 }, 00:10:33.490 "memory_domains": [ 00:10:33.490 { 00:10:33.490 "dma_device_id": "system", 00:10:33.490 "dma_device_type": 1 00:10:33.490 }, 00:10:33.490 { 00:10:33.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.490 "dma_device_type": 2 00:10:33.490 } 00:10:33.490 ], 00:10:33.490 "driver_specific": {} 00:10:33.490 } 00:10:33.490 ] 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.490 "name": "Existed_Raid", 00:10:33.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.490 "strip_size_kb": 0, 00:10:33.490 "state": "configuring", 00:10:33.490 "raid_level": "raid1", 00:10:33.490 "superblock": false, 00:10:33.490 "num_base_bdevs": 4, 00:10:33.490 "num_base_bdevs_discovered": 3, 00:10:33.490 "num_base_bdevs_operational": 4, 00:10:33.490 "base_bdevs_list": [ 00:10:33.490 { 00:10:33.490 "name": "BaseBdev1", 00:10:33.490 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:33.490 "is_configured": true, 00:10:33.490 "data_offset": 0, 00:10:33.490 "data_size": 65536 00:10:33.490 }, 00:10:33.490 { 00:10:33.490 "name": "BaseBdev2", 00:10:33.490 "uuid": "b5edaa84-3fab-441a-b87d-4767f174b1cd", 00:10:33.490 "is_configured": true, 00:10:33.490 "data_offset": 0, 00:10:33.490 "data_size": 65536 00:10:33.490 }, 00:10:33.490 { 00:10:33.490 "name": "BaseBdev3", 00:10:33.490 "uuid": "1d52bca5-acd8-4a14-8237-aad1a5a4d4e9", 00:10:33.490 "is_configured": true, 00:10:33.490 "data_offset": 0, 00:10:33.490 "data_size": 65536 00:10:33.490 }, 00:10:33.490 { 00:10:33.490 "name": "BaseBdev4", 00:10:33.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.490 "is_configured": false, 
00:10:33.490 "data_offset": 0, 00:10:33.490 "data_size": 0 00:10:33.490 } 00:10:33.490 ] 00:10:33.490 }' 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.490 23:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.057 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:34.057 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.057 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.057 [2024-11-18 23:05:53.160275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.057 [2024-11-18 23:05:53.160344] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:34.057 [2024-11-18 23:05:53.160352] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:34.057 [2024-11-18 23:05:53.160639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:34.058 [2024-11-18 23:05:53.160785] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:34.058 [2024-11-18 23:05:53.160798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:34.058 [2024-11-18 23:05:53.161005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.058 BaseBdev4 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.058 [ 00:10:34.058 { 00:10:34.058 "name": "BaseBdev4", 00:10:34.058 "aliases": [ 00:10:34.058 "865a232a-7659-4cd4-9816-aa9d7305ded5" 00:10:34.058 ], 00:10:34.058 "product_name": "Malloc disk", 00:10:34.058 "block_size": 512, 00:10:34.058 "num_blocks": 65536, 00:10:34.058 "uuid": "865a232a-7659-4cd4-9816-aa9d7305ded5", 00:10:34.058 "assigned_rate_limits": { 00:10:34.058 "rw_ios_per_sec": 0, 00:10:34.058 "rw_mbytes_per_sec": 0, 00:10:34.058 "r_mbytes_per_sec": 0, 00:10:34.058 "w_mbytes_per_sec": 0 00:10:34.058 }, 00:10:34.058 "claimed": true, 00:10:34.058 "claim_type": "exclusive_write", 00:10:34.058 "zoned": false, 00:10:34.058 "supported_io_types": { 00:10:34.058 "read": true, 00:10:34.058 "write": true, 00:10:34.058 "unmap": true, 00:10:34.058 "flush": true, 00:10:34.058 "reset": true, 00:10:34.058 
"nvme_admin": false, 00:10:34.058 "nvme_io": false, 00:10:34.058 "nvme_io_md": false, 00:10:34.058 "write_zeroes": true, 00:10:34.058 "zcopy": true, 00:10:34.058 "get_zone_info": false, 00:10:34.058 "zone_management": false, 00:10:34.058 "zone_append": false, 00:10:34.058 "compare": false, 00:10:34.058 "compare_and_write": false, 00:10:34.058 "abort": true, 00:10:34.058 "seek_hole": false, 00:10:34.058 "seek_data": false, 00:10:34.058 "copy": true, 00:10:34.058 "nvme_iov_md": false 00:10:34.058 }, 00:10:34.058 "memory_domains": [ 00:10:34.058 { 00:10:34.058 "dma_device_id": "system", 00:10:34.058 "dma_device_type": 1 00:10:34.058 }, 00:10:34.058 { 00:10:34.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.058 "dma_device_type": 2 00:10:34.058 } 00:10:34.058 ], 00:10:34.058 "driver_specific": {} 00:10:34.058 } 00:10:34.058 ] 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.058 23:05:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.058 "name": "Existed_Raid", 00:10:34.058 "uuid": "62b6632a-0e04-4406-9201-8d88643e8a97", 00:10:34.058 "strip_size_kb": 0, 00:10:34.058 "state": "online", 00:10:34.058 "raid_level": "raid1", 00:10:34.058 "superblock": false, 00:10:34.058 "num_base_bdevs": 4, 00:10:34.058 "num_base_bdevs_discovered": 4, 00:10:34.058 "num_base_bdevs_operational": 4, 00:10:34.058 "base_bdevs_list": [ 00:10:34.058 { 00:10:34.058 "name": "BaseBdev1", 00:10:34.058 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:34.058 "is_configured": true, 00:10:34.058 "data_offset": 0, 00:10:34.058 "data_size": 65536 00:10:34.058 }, 00:10:34.058 { 00:10:34.058 "name": "BaseBdev2", 00:10:34.058 "uuid": "b5edaa84-3fab-441a-b87d-4767f174b1cd", 00:10:34.058 "is_configured": true, 00:10:34.058 "data_offset": 0, 00:10:34.058 "data_size": 65536 00:10:34.058 }, 00:10:34.058 { 00:10:34.058 "name": "BaseBdev3", 00:10:34.058 "uuid": 
"1d52bca5-acd8-4a14-8237-aad1a5a4d4e9", 00:10:34.058 "is_configured": true, 00:10:34.058 "data_offset": 0, 00:10:34.058 "data_size": 65536 00:10:34.058 }, 00:10:34.058 { 00:10:34.058 "name": "BaseBdev4", 00:10:34.058 "uuid": "865a232a-7659-4cd4-9816-aa9d7305ded5", 00:10:34.058 "is_configured": true, 00:10:34.058 "data_offset": 0, 00:10:34.058 "data_size": 65536 00:10:34.058 } 00:10:34.058 ] 00:10:34.058 }' 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.058 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.317 [2024-11-18 23:05:53.663762] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.317 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.577 23:05:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.577 "name": "Existed_Raid", 00:10:34.577 "aliases": [ 00:10:34.577 "62b6632a-0e04-4406-9201-8d88643e8a97" 00:10:34.577 ], 00:10:34.577 "product_name": "Raid Volume", 00:10:34.577 "block_size": 512, 00:10:34.577 "num_blocks": 65536, 00:10:34.577 "uuid": "62b6632a-0e04-4406-9201-8d88643e8a97", 00:10:34.577 "assigned_rate_limits": { 00:10:34.577 "rw_ios_per_sec": 0, 00:10:34.577 "rw_mbytes_per_sec": 0, 00:10:34.577 "r_mbytes_per_sec": 0, 00:10:34.577 "w_mbytes_per_sec": 0 00:10:34.577 }, 00:10:34.577 "claimed": false, 00:10:34.577 "zoned": false, 00:10:34.577 "supported_io_types": { 00:10:34.577 "read": true, 00:10:34.577 "write": true, 00:10:34.577 "unmap": false, 00:10:34.577 "flush": false, 00:10:34.577 "reset": true, 00:10:34.577 "nvme_admin": false, 00:10:34.577 "nvme_io": false, 00:10:34.577 "nvme_io_md": false, 00:10:34.577 "write_zeroes": true, 00:10:34.577 "zcopy": false, 00:10:34.577 "get_zone_info": false, 00:10:34.577 "zone_management": false, 00:10:34.577 "zone_append": false, 00:10:34.577 "compare": false, 00:10:34.577 "compare_and_write": false, 00:10:34.577 "abort": false, 00:10:34.577 "seek_hole": false, 00:10:34.577 "seek_data": false, 00:10:34.577 "copy": false, 00:10:34.577 "nvme_iov_md": false 00:10:34.577 }, 00:10:34.577 "memory_domains": [ 00:10:34.577 { 00:10:34.577 "dma_device_id": "system", 00:10:34.577 "dma_device_type": 1 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.577 "dma_device_type": 2 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "system", 00:10:34.577 "dma_device_type": 1 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.577 "dma_device_type": 2 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "system", 00:10:34.577 "dma_device_type": 1 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:34.577 "dma_device_type": 2 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "system", 00:10:34.577 "dma_device_type": 1 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.577 "dma_device_type": 2 00:10:34.577 } 00:10:34.577 ], 00:10:34.577 "driver_specific": { 00:10:34.577 "raid": { 00:10:34.577 "uuid": "62b6632a-0e04-4406-9201-8d88643e8a97", 00:10:34.577 "strip_size_kb": 0, 00:10:34.577 "state": "online", 00:10:34.577 "raid_level": "raid1", 00:10:34.577 "superblock": false, 00:10:34.577 "num_base_bdevs": 4, 00:10:34.577 "num_base_bdevs_discovered": 4, 00:10:34.577 "num_base_bdevs_operational": 4, 00:10:34.577 "base_bdevs_list": [ 00:10:34.577 { 00:10:34.577 "name": "BaseBdev1", 00:10:34.577 "uuid": "661a6ebb-feed-497c-8a12-f2c32bff20c0", 00:10:34.577 "is_configured": true, 00:10:34.577 "data_offset": 0, 00:10:34.577 "data_size": 65536 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "name": "BaseBdev2", 00:10:34.577 "uuid": "b5edaa84-3fab-441a-b87d-4767f174b1cd", 00:10:34.577 "is_configured": true, 00:10:34.577 "data_offset": 0, 00:10:34.577 "data_size": 65536 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "name": "BaseBdev3", 00:10:34.577 "uuid": "1d52bca5-acd8-4a14-8237-aad1a5a4d4e9", 00:10:34.577 "is_configured": true, 00:10:34.577 "data_offset": 0, 00:10:34.577 "data_size": 65536 00:10:34.577 }, 00:10:34.577 { 00:10:34.577 "name": "BaseBdev4", 00:10:34.577 "uuid": "865a232a-7659-4cd4-9816-aa9d7305ded5", 00:10:34.577 "is_configured": true, 00:10:34.577 "data_offset": 0, 00:10:34.577 "data_size": 65536 00:10:34.577 } 00:10:34.577 ] 00:10:34.577 } 00:10:34.577 } 00:10:34.577 }' 00:10:34.577 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.577 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.577 BaseBdev2 00:10:34.577 BaseBdev3 
00:10:34.577 BaseBdev4' 00:10:34.577 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.578 23:05:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.578 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.838 23:05:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.838 [2024-11-18 23:05:53.983375] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.838 
23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.838 23:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.838 "name": "Existed_Raid", 00:10:34.838 "uuid": "62b6632a-0e04-4406-9201-8d88643e8a97", 00:10:34.838 "strip_size_kb": 0, 00:10:34.838 "state": "online", 00:10:34.838 "raid_level": "raid1", 00:10:34.838 "superblock": false, 00:10:34.838 "num_base_bdevs": 4, 00:10:34.838 "num_base_bdevs_discovered": 3, 00:10:34.838 "num_base_bdevs_operational": 3, 00:10:34.838 "base_bdevs_list": [ 00:10:34.838 { 00:10:34.838 "name": null, 00:10:34.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.838 "is_configured": false, 00:10:34.838 "data_offset": 0, 00:10:34.838 "data_size": 65536 00:10:34.838 }, 00:10:34.838 { 00:10:34.838 "name": "BaseBdev2", 00:10:34.838 "uuid": "b5edaa84-3fab-441a-b87d-4767f174b1cd", 00:10:34.838 "is_configured": true, 00:10:34.838 "data_offset": 0, 00:10:34.838 "data_size": 65536 00:10:34.838 }, 00:10:34.838 { 00:10:34.838 "name": "BaseBdev3", 00:10:34.838 "uuid": "1d52bca5-acd8-4a14-8237-aad1a5a4d4e9", 00:10:34.838 "is_configured": true, 00:10:34.838 "data_offset": 0, 
00:10:34.838 "data_size": 65536 00:10:34.838 }, 00:10:34.838 { 00:10:34.838 "name": "BaseBdev4", 00:10:34.838 "uuid": "865a232a-7659-4cd4-9816-aa9d7305ded5", 00:10:34.838 "is_configured": true, 00:10:34.838 "data_offset": 0, 00:10:34.838 "data_size": 65536 00:10:34.838 } 00:10:34.838 ] 00:10:34.838 }' 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.838 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.098 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.358 [2024-11-18 23:05:54.481812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.358 [2024-11-18 23:05:54.548911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.358 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.358 [2024-11-18 23:05:54.620089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:35.358 [2024-11-18 23:05:54.620220] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.358 [2024-11-18 23:05:54.631604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.358 [2024-11-18 23:05:54.631709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.358 [2024-11-18 23:05:54.631762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 BaseBdev2 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.359 [ 00:10:35.359 { 00:10:35.359 "name": "BaseBdev2", 00:10:35.359 "aliases": [ 00:10:35.359 "c6afa716-11f9-49cd-ada4-f36cd3ca61e9" 00:10:35.359 ], 00:10:35.359 "product_name": "Malloc disk", 00:10:35.359 "block_size": 512, 00:10:35.359 "num_blocks": 65536, 00:10:35.359 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:35.359 "assigned_rate_limits": { 00:10:35.359 "rw_ios_per_sec": 0, 00:10:35.359 "rw_mbytes_per_sec": 0, 00:10:35.359 "r_mbytes_per_sec": 0, 00:10:35.359 "w_mbytes_per_sec": 0 00:10:35.359 }, 00:10:35.359 "claimed": false, 00:10:35.359 "zoned": false, 00:10:35.359 "supported_io_types": { 00:10:35.359 "read": true, 00:10:35.359 "write": true, 00:10:35.359 "unmap": true, 00:10:35.359 "flush": true, 00:10:35.359 "reset": true, 00:10:35.359 "nvme_admin": false, 00:10:35.359 "nvme_io": false, 00:10:35.359 "nvme_io_md": false, 00:10:35.359 "write_zeroes": true, 00:10:35.359 "zcopy": true, 00:10:35.359 "get_zone_info": false, 00:10:35.359 "zone_management": false, 00:10:35.359 "zone_append": false, 
00:10:35.359 "compare": false, 00:10:35.359 "compare_and_write": false, 00:10:35.359 "abort": true, 00:10:35.359 "seek_hole": false, 00:10:35.359 "seek_data": false, 00:10:35.359 "copy": true, 00:10:35.359 "nvme_iov_md": false 00:10:35.359 }, 00:10:35.359 "memory_domains": [ 00:10:35.359 { 00:10:35.359 "dma_device_id": "system", 00:10:35.359 "dma_device_type": 1 00:10:35.359 }, 00:10:35.359 { 00:10:35.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.359 "dma_device_type": 2 00:10:35.359 } 00:10:35.359 ], 00:10:35.359 "driver_specific": {} 00:10:35.359 } 00:10:35.359 ] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.359 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.618 BaseBdev3 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.618 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.618 [ 00:10:35.618 { 00:10:35.618 "name": "BaseBdev3", 00:10:35.618 "aliases": [ 00:10:35.618 "d2e1eecf-59b2-4722-a40e-22c6dbfe945f" 00:10:35.618 ], 00:10:35.618 "product_name": "Malloc disk", 00:10:35.618 "block_size": 512, 00:10:35.618 "num_blocks": 65536, 00:10:35.618 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:35.618 "assigned_rate_limits": { 00:10:35.618 "rw_ios_per_sec": 0, 00:10:35.618 "rw_mbytes_per_sec": 0, 00:10:35.618 "r_mbytes_per_sec": 0, 00:10:35.618 "w_mbytes_per_sec": 0 00:10:35.618 }, 00:10:35.618 "claimed": false, 00:10:35.619 "zoned": false, 00:10:35.619 "supported_io_types": { 00:10:35.619 "read": true, 00:10:35.619 "write": true, 00:10:35.619 "unmap": true, 00:10:35.619 "flush": true, 00:10:35.619 "reset": true, 00:10:35.619 "nvme_admin": false, 00:10:35.619 "nvme_io": false, 00:10:35.619 "nvme_io_md": false, 00:10:35.619 "write_zeroes": true, 00:10:35.619 "zcopy": true, 00:10:35.619 "get_zone_info": false, 00:10:35.619 "zone_management": false, 00:10:35.619 "zone_append": false, 
00:10:35.619 "compare": false, 00:10:35.619 "compare_and_write": false, 00:10:35.619 "abort": true, 00:10:35.619 "seek_hole": false, 00:10:35.619 "seek_data": false, 00:10:35.619 "copy": true, 00:10:35.619 "nvme_iov_md": false 00:10:35.619 }, 00:10:35.619 "memory_domains": [ 00:10:35.619 { 00:10:35.619 "dma_device_id": "system", 00:10:35.619 "dma_device_type": 1 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.619 "dma_device_type": 2 00:10:35.619 } 00:10:35.619 ], 00:10:35.619 "driver_specific": {} 00:10:35.619 } 00:10:35.619 ] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 BaseBdev4 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 [ 00:10:35.619 { 00:10:35.619 "name": "BaseBdev4", 00:10:35.619 "aliases": [ 00:10:35.619 "4fac0110-b34b-4bd2-9075-e6199b711e77" 00:10:35.619 ], 00:10:35.619 "product_name": "Malloc disk", 00:10:35.619 "block_size": 512, 00:10:35.619 "num_blocks": 65536, 00:10:35.619 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:35.619 "assigned_rate_limits": { 00:10:35.619 "rw_ios_per_sec": 0, 00:10:35.619 "rw_mbytes_per_sec": 0, 00:10:35.619 "r_mbytes_per_sec": 0, 00:10:35.619 "w_mbytes_per_sec": 0 00:10:35.619 }, 00:10:35.619 "claimed": false, 00:10:35.619 "zoned": false, 00:10:35.619 "supported_io_types": { 00:10:35.619 "read": true, 00:10:35.619 "write": true, 00:10:35.619 "unmap": true, 00:10:35.619 "flush": true, 00:10:35.619 "reset": true, 00:10:35.619 "nvme_admin": false, 00:10:35.619 "nvme_io": false, 00:10:35.619 "nvme_io_md": false, 00:10:35.619 "write_zeroes": true, 00:10:35.619 "zcopy": true, 00:10:35.619 "get_zone_info": false, 00:10:35.619 "zone_management": false, 00:10:35.619 "zone_append": false, 
00:10:35.619 "compare": false, 00:10:35.619 "compare_and_write": false, 00:10:35.619 "abort": true, 00:10:35.619 "seek_hole": false, 00:10:35.619 "seek_data": false, 00:10:35.619 "copy": true, 00:10:35.619 "nvme_iov_md": false 00:10:35.619 }, 00:10:35.619 "memory_domains": [ 00:10:35.619 { 00:10:35.619 "dma_device_id": "system", 00:10:35.619 "dma_device_type": 1 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.619 "dma_device_type": 2 00:10:35.619 } 00:10:35.619 ], 00:10:35.619 "driver_specific": {} 00:10:35.619 } 00:10:35.619 ] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 [2024-11-18 23:05:54.831168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.619 [2024-11-18 23:05:54.831281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.619 [2024-11-18 23:05:54.831339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.619 [2024-11-18 23:05:54.833143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.619 [2024-11-18 23:05:54.833239] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:35.619 "name": "Existed_Raid", 00:10:35.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.619 "strip_size_kb": 0, 00:10:35.619 "state": "configuring", 00:10:35.619 "raid_level": "raid1", 00:10:35.619 "superblock": false, 00:10:35.619 "num_base_bdevs": 4, 00:10:35.619 "num_base_bdevs_discovered": 3, 00:10:35.619 "num_base_bdevs_operational": 4, 00:10:35.619 "base_bdevs_list": [ 00:10:35.619 { 00:10:35.619 "name": "BaseBdev1", 00:10:35.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.619 "is_configured": false, 00:10:35.619 "data_offset": 0, 00:10:35.619 "data_size": 0 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "name": "BaseBdev2", 00:10:35.619 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 0, 00:10:35.619 "data_size": 65536 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "name": "BaseBdev3", 00:10:35.619 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 0, 00:10:35.619 "data_size": 65536 00:10:35.619 }, 00:10:35.619 { 00:10:35.619 "name": "BaseBdev4", 00:10:35.619 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:35.619 "is_configured": true, 00:10:35.619 "data_offset": 0, 00:10:35.619 "data_size": 65536 00:10:35.619 } 00:10:35.619 ] 00:10:35.619 }' 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.619 23:05:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.188 [2024-11-18 23:05:55.290371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.188 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.188 "name": "Existed_Raid", 00:10:36.188 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:36.188 "strip_size_kb": 0, 00:10:36.188 "state": "configuring", 00:10:36.188 "raid_level": "raid1", 00:10:36.188 "superblock": false, 00:10:36.188 "num_base_bdevs": 4, 00:10:36.188 "num_base_bdevs_discovered": 2, 00:10:36.188 "num_base_bdevs_operational": 4, 00:10:36.188 "base_bdevs_list": [ 00:10:36.188 { 00:10:36.188 "name": "BaseBdev1", 00:10:36.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.188 "is_configured": false, 00:10:36.188 "data_offset": 0, 00:10:36.188 "data_size": 0 00:10:36.188 }, 00:10:36.188 { 00:10:36.188 "name": null, 00:10:36.188 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:36.188 "is_configured": false, 00:10:36.188 "data_offset": 0, 00:10:36.188 "data_size": 65536 00:10:36.188 }, 00:10:36.188 { 00:10:36.188 "name": "BaseBdev3", 00:10:36.188 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:36.189 "is_configured": true, 00:10:36.189 "data_offset": 0, 00:10:36.189 "data_size": 65536 00:10:36.189 }, 00:10:36.189 { 00:10:36.189 "name": "BaseBdev4", 00:10:36.189 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:36.189 "is_configured": true, 00:10:36.189 "data_offset": 0, 00:10:36.189 "data_size": 65536 00:10:36.189 } 00:10:36.189 ] 00:10:36.189 }' 00:10:36.189 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.189 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.448 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.449 [2024-11-18 23:05:55.748529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.449 BaseBdev1 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.449 [ 00:10:36.449 { 00:10:36.449 "name": "BaseBdev1", 00:10:36.449 "aliases": [ 00:10:36.449 "04172875-6276-405b-afef-6f3905da576f" 00:10:36.449 ], 00:10:36.449 "product_name": "Malloc disk", 00:10:36.449 "block_size": 512, 00:10:36.449 "num_blocks": 65536, 00:10:36.449 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:36.449 "assigned_rate_limits": { 00:10:36.449 "rw_ios_per_sec": 0, 00:10:36.449 "rw_mbytes_per_sec": 0, 00:10:36.449 "r_mbytes_per_sec": 0, 00:10:36.449 "w_mbytes_per_sec": 0 00:10:36.449 }, 00:10:36.449 "claimed": true, 00:10:36.449 "claim_type": "exclusive_write", 00:10:36.449 "zoned": false, 00:10:36.449 "supported_io_types": { 00:10:36.449 "read": true, 00:10:36.449 "write": true, 00:10:36.449 "unmap": true, 00:10:36.449 "flush": true, 00:10:36.449 "reset": true, 00:10:36.449 "nvme_admin": false, 00:10:36.449 "nvme_io": false, 00:10:36.449 "nvme_io_md": false, 00:10:36.449 "write_zeroes": true, 00:10:36.449 "zcopy": true, 00:10:36.449 "get_zone_info": false, 00:10:36.449 "zone_management": false, 00:10:36.449 "zone_append": false, 00:10:36.449 "compare": false, 00:10:36.449 "compare_and_write": false, 00:10:36.449 "abort": true, 00:10:36.449 "seek_hole": false, 00:10:36.449 "seek_data": false, 00:10:36.449 "copy": true, 00:10:36.449 "nvme_iov_md": false 00:10:36.449 }, 00:10:36.449 "memory_domains": [ 00:10:36.449 { 00:10:36.449 "dma_device_id": "system", 00:10:36.449 "dma_device_type": 1 00:10:36.449 }, 00:10:36.449 { 00:10:36.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.449 "dma_device_type": 2 00:10:36.449 } 00:10:36.449 ], 00:10:36.449 "driver_specific": {} 00:10:36.449 } 00:10:36.449 ] 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.449 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.709 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.709 "name": "Existed_Raid", 00:10:36.709 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:36.709 "strip_size_kb": 0, 00:10:36.709 "state": "configuring", 00:10:36.709 "raid_level": "raid1", 00:10:36.709 "superblock": false, 00:10:36.709 "num_base_bdevs": 4, 00:10:36.709 "num_base_bdevs_discovered": 3, 00:10:36.709 "num_base_bdevs_operational": 4, 00:10:36.709 "base_bdevs_list": [ 00:10:36.709 { 00:10:36.709 "name": "BaseBdev1", 00:10:36.709 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:36.709 "is_configured": true, 00:10:36.709 "data_offset": 0, 00:10:36.709 "data_size": 65536 00:10:36.709 }, 00:10:36.709 { 00:10:36.709 "name": null, 00:10:36.709 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:36.709 "is_configured": false, 00:10:36.709 "data_offset": 0, 00:10:36.709 "data_size": 65536 00:10:36.709 }, 00:10:36.709 { 00:10:36.709 "name": "BaseBdev3", 00:10:36.709 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:36.709 "is_configured": true, 00:10:36.709 "data_offset": 0, 00:10:36.709 "data_size": 65536 00:10:36.709 }, 00:10:36.709 { 00:10:36.709 "name": "BaseBdev4", 00:10:36.709 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:36.709 "is_configured": true, 00:10:36.709 "data_offset": 0, 00:10:36.709 "data_size": 65536 00:10:36.709 } 00:10:36.709 ] 00:10:36.709 }' 00:10:36.709 23:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.709 23:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.974 [2024-11-18 23:05:56.235742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.974 "name": "Existed_Raid", 00:10:36.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.974 "strip_size_kb": 0, 00:10:36.974 "state": "configuring", 00:10:36.974 "raid_level": "raid1", 00:10:36.974 "superblock": false, 00:10:36.974 "num_base_bdevs": 4, 00:10:36.974 "num_base_bdevs_discovered": 2, 00:10:36.974 "num_base_bdevs_operational": 4, 00:10:36.974 "base_bdevs_list": [ 00:10:36.974 { 00:10:36.974 "name": "BaseBdev1", 00:10:36.974 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:36.974 "is_configured": true, 00:10:36.974 "data_offset": 0, 00:10:36.974 "data_size": 65536 00:10:36.974 }, 00:10:36.974 { 00:10:36.974 "name": null, 00:10:36.974 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:36.974 "is_configured": false, 00:10:36.974 "data_offset": 0, 00:10:36.974 "data_size": 65536 00:10:36.974 }, 00:10:36.974 { 00:10:36.974 "name": null, 00:10:36.974 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:36.974 "is_configured": false, 00:10:36.974 "data_offset": 0, 00:10:36.974 "data_size": 65536 00:10:36.974 }, 00:10:36.974 { 00:10:36.974 "name": "BaseBdev4", 00:10:36.974 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:36.974 "is_configured": true, 00:10:36.974 "data_offset": 0, 00:10:36.974 "data_size": 65536 00:10:36.974 } 00:10:36.974 ] 00:10:36.974 }' 00:10:36.974 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.974 23:05:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.317 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.318 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.318 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.318 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.318 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.576 [2024-11-18 23:05:56.718970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.576 23:05:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.576 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.576 "name": "Existed_Raid", 00:10:37.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.576 "strip_size_kb": 0, 00:10:37.576 "state": "configuring", 00:10:37.576 "raid_level": "raid1", 00:10:37.576 "superblock": false, 00:10:37.576 "num_base_bdevs": 4, 00:10:37.576 "num_base_bdevs_discovered": 3, 00:10:37.576 "num_base_bdevs_operational": 4, 00:10:37.576 "base_bdevs_list": [ 00:10:37.576 { 00:10:37.576 "name": "BaseBdev1", 00:10:37.576 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:37.577 "is_configured": true, 00:10:37.577 "data_offset": 0, 00:10:37.577 "data_size": 65536 00:10:37.577 }, 00:10:37.577 { 00:10:37.577 "name": null, 00:10:37.577 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:37.577 "is_configured": false, 00:10:37.577 "data_offset": 
0, 00:10:37.577 "data_size": 65536 00:10:37.577 }, 00:10:37.577 { 00:10:37.577 "name": "BaseBdev3", 00:10:37.577 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:37.577 "is_configured": true, 00:10:37.577 "data_offset": 0, 00:10:37.577 "data_size": 65536 00:10:37.577 }, 00:10:37.577 { 00:10:37.577 "name": "BaseBdev4", 00:10:37.577 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:37.577 "is_configured": true, 00:10:37.577 "data_offset": 0, 00:10:37.577 "data_size": 65536 00:10:37.577 } 00:10:37.577 ] 00:10:37.577 }' 00:10:37.577 23:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.577 23:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.836 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.836 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.836 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.094 [2024-11-18 23:05:57.234091] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.094 23:05:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.094 "name": "Existed_Raid", 00:10:38.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.094 "strip_size_kb": 0, 00:10:38.094 "state": "configuring", 00:10:38.094 
"raid_level": "raid1", 00:10:38.094 "superblock": false, 00:10:38.094 "num_base_bdevs": 4, 00:10:38.094 "num_base_bdevs_discovered": 2, 00:10:38.094 "num_base_bdevs_operational": 4, 00:10:38.094 "base_bdevs_list": [ 00:10:38.094 { 00:10:38.094 "name": null, 00:10:38.094 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:38.094 "is_configured": false, 00:10:38.094 "data_offset": 0, 00:10:38.094 "data_size": 65536 00:10:38.094 }, 00:10:38.094 { 00:10:38.094 "name": null, 00:10:38.094 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:38.094 "is_configured": false, 00:10:38.094 "data_offset": 0, 00:10:38.094 "data_size": 65536 00:10:38.094 }, 00:10:38.094 { 00:10:38.094 "name": "BaseBdev3", 00:10:38.094 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:38.094 "is_configured": true, 00:10:38.094 "data_offset": 0, 00:10:38.094 "data_size": 65536 00:10:38.094 }, 00:10:38.094 { 00:10:38.094 "name": "BaseBdev4", 00:10:38.094 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:38.094 "is_configured": true, 00:10:38.094 "data_offset": 0, 00:10:38.094 "data_size": 65536 00:10:38.094 } 00:10:38.094 ] 00:10:38.094 }' 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.094 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.354 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.613 [2024-11-18 23:05:57.731782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.613 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.613 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.613 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.613 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.614 "name": "Existed_Raid", 00:10:38.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.614 "strip_size_kb": 0, 00:10:38.614 "state": "configuring", 00:10:38.614 "raid_level": "raid1", 00:10:38.614 "superblock": false, 00:10:38.614 "num_base_bdevs": 4, 00:10:38.614 "num_base_bdevs_discovered": 3, 00:10:38.614 "num_base_bdevs_operational": 4, 00:10:38.614 "base_bdevs_list": [ 00:10:38.614 { 00:10:38.614 "name": null, 00:10:38.614 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:38.614 "is_configured": false, 00:10:38.614 "data_offset": 0, 00:10:38.614 "data_size": 65536 00:10:38.614 }, 00:10:38.614 { 00:10:38.614 "name": "BaseBdev2", 00:10:38.614 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:38.614 "is_configured": true, 00:10:38.614 "data_offset": 0, 00:10:38.614 "data_size": 65536 00:10:38.614 }, 00:10:38.614 { 00:10:38.614 "name": "BaseBdev3", 00:10:38.614 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:38.614 "is_configured": true, 00:10:38.614 "data_offset": 0, 00:10:38.614 "data_size": 65536 00:10:38.614 }, 00:10:38.614 { 00:10:38.614 "name": "BaseBdev4", 00:10:38.614 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:38.614 "is_configured": true, 00:10:38.614 "data_offset": 0, 00:10:38.614 "data_size": 65536 00:10:38.614 } 00:10:38.614 ] 00:10:38.614 }' 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.614 23:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.874 23:05:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04172875-6276-405b-afef-6f3905da576f 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.874 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.134 [2024-11-18 23:05:58.258206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.134 [2024-11-18 23:05:58.258256] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:39.134 [2024-11-18 23:05:58.258267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:39.134 
[2024-11-18 23:05:58.258530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:39.134 [2024-11-18 23:05:58.258663] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:39.134 [2024-11-18 23:05:58.258679] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:39.134 [2024-11-18 23:05:58.258873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.134 NewBaseBdev 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:39.134 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.134 [ 00:10:39.134 { 00:10:39.134 "name": "NewBaseBdev", 00:10:39.134 "aliases": [ 00:10:39.134 "04172875-6276-405b-afef-6f3905da576f" 00:10:39.134 ], 00:10:39.134 "product_name": "Malloc disk", 00:10:39.134 "block_size": 512, 00:10:39.134 "num_blocks": 65536, 00:10:39.134 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:39.134 "assigned_rate_limits": { 00:10:39.134 "rw_ios_per_sec": 0, 00:10:39.134 "rw_mbytes_per_sec": 0, 00:10:39.134 "r_mbytes_per_sec": 0, 00:10:39.134 "w_mbytes_per_sec": 0 00:10:39.134 }, 00:10:39.134 "claimed": true, 00:10:39.134 "claim_type": "exclusive_write", 00:10:39.134 "zoned": false, 00:10:39.135 "supported_io_types": { 00:10:39.135 "read": true, 00:10:39.135 "write": true, 00:10:39.135 "unmap": true, 00:10:39.135 "flush": true, 00:10:39.135 "reset": true, 00:10:39.135 "nvme_admin": false, 00:10:39.135 "nvme_io": false, 00:10:39.135 "nvme_io_md": false, 00:10:39.135 "write_zeroes": true, 00:10:39.135 "zcopy": true, 00:10:39.135 "get_zone_info": false, 00:10:39.135 "zone_management": false, 00:10:39.135 "zone_append": false, 00:10:39.135 "compare": false, 00:10:39.135 "compare_and_write": false, 00:10:39.135 "abort": true, 00:10:39.135 "seek_hole": false, 00:10:39.135 "seek_data": false, 00:10:39.135 "copy": true, 00:10:39.135 "nvme_iov_md": false 00:10:39.135 }, 00:10:39.135 "memory_domains": [ 00:10:39.135 { 00:10:39.135 "dma_device_id": "system", 00:10:39.135 "dma_device_type": 1 00:10:39.135 }, 00:10:39.135 { 00:10:39.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.135 "dma_device_type": 2 00:10:39.135 } 00:10:39.135 ], 00:10:39.135 "driver_specific": {} 00:10:39.135 } 00:10:39.135 ] 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.135 "name": "Existed_Raid", 00:10:39.135 "uuid": "0ef74c7c-e2f6-490a-8760-0c289becbe59", 00:10:39.135 "strip_size_kb": 0, 00:10:39.135 "state": "online", 00:10:39.135 
"raid_level": "raid1", 00:10:39.135 "superblock": false, 00:10:39.135 "num_base_bdevs": 4, 00:10:39.135 "num_base_bdevs_discovered": 4, 00:10:39.135 "num_base_bdevs_operational": 4, 00:10:39.135 "base_bdevs_list": [ 00:10:39.135 { 00:10:39.135 "name": "NewBaseBdev", 00:10:39.135 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:39.135 "is_configured": true, 00:10:39.135 "data_offset": 0, 00:10:39.135 "data_size": 65536 00:10:39.135 }, 00:10:39.135 { 00:10:39.135 "name": "BaseBdev2", 00:10:39.135 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:39.135 "is_configured": true, 00:10:39.135 "data_offset": 0, 00:10:39.135 "data_size": 65536 00:10:39.135 }, 00:10:39.135 { 00:10:39.135 "name": "BaseBdev3", 00:10:39.135 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:39.135 "is_configured": true, 00:10:39.135 "data_offset": 0, 00:10:39.135 "data_size": 65536 00:10:39.135 }, 00:10:39.135 { 00:10:39.135 "name": "BaseBdev4", 00:10:39.135 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:39.135 "is_configured": true, 00:10:39.135 "data_offset": 0, 00:10:39.135 "data_size": 65536 00:10:39.135 } 00:10:39.135 ] 00:10:39.135 }' 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.135 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.395 [2024-11-18 23:05:58.725743] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.395 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.395 "name": "Existed_Raid", 00:10:39.395 "aliases": [ 00:10:39.395 "0ef74c7c-e2f6-490a-8760-0c289becbe59" 00:10:39.395 ], 00:10:39.395 "product_name": "Raid Volume", 00:10:39.395 "block_size": 512, 00:10:39.395 "num_blocks": 65536, 00:10:39.395 "uuid": "0ef74c7c-e2f6-490a-8760-0c289becbe59", 00:10:39.395 "assigned_rate_limits": { 00:10:39.395 "rw_ios_per_sec": 0, 00:10:39.395 "rw_mbytes_per_sec": 0, 00:10:39.395 "r_mbytes_per_sec": 0, 00:10:39.395 "w_mbytes_per_sec": 0 00:10:39.395 }, 00:10:39.395 "claimed": false, 00:10:39.395 "zoned": false, 00:10:39.395 "supported_io_types": { 00:10:39.395 "read": true, 00:10:39.395 "write": true, 00:10:39.395 "unmap": false, 00:10:39.395 "flush": false, 00:10:39.395 "reset": true, 00:10:39.395 "nvme_admin": false, 00:10:39.395 "nvme_io": false, 00:10:39.395 "nvme_io_md": false, 00:10:39.395 "write_zeroes": true, 00:10:39.395 "zcopy": false, 00:10:39.395 "get_zone_info": false, 00:10:39.395 "zone_management": false, 00:10:39.395 "zone_append": false, 00:10:39.395 "compare": false, 00:10:39.395 "compare_and_write": false, 00:10:39.395 "abort": false, 00:10:39.395 "seek_hole": false, 00:10:39.395 "seek_data": false, 00:10:39.395 
"copy": false, 00:10:39.395 "nvme_iov_md": false 00:10:39.395 }, 00:10:39.395 "memory_domains": [ 00:10:39.395 { 00:10:39.395 "dma_device_id": "system", 00:10:39.395 "dma_device_type": 1 00:10:39.395 }, 00:10:39.395 { 00:10:39.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.395 "dma_device_type": 2 00:10:39.395 }, 00:10:39.395 { 00:10:39.395 "dma_device_id": "system", 00:10:39.395 "dma_device_type": 1 00:10:39.395 }, 00:10:39.396 { 00:10:39.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.396 "dma_device_type": 2 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "dma_device_id": "system", 00:10:39.396 "dma_device_type": 1 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.396 "dma_device_type": 2 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "dma_device_id": "system", 00:10:39.396 "dma_device_type": 1 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.396 "dma_device_type": 2 00:10:39.396 } 00:10:39.396 ], 00:10:39.396 "driver_specific": { 00:10:39.396 "raid": { 00:10:39.396 "uuid": "0ef74c7c-e2f6-490a-8760-0c289becbe59", 00:10:39.396 "strip_size_kb": 0, 00:10:39.396 "state": "online", 00:10:39.396 "raid_level": "raid1", 00:10:39.396 "superblock": false, 00:10:39.396 "num_base_bdevs": 4, 00:10:39.396 "num_base_bdevs_discovered": 4, 00:10:39.396 "num_base_bdevs_operational": 4, 00:10:39.396 "base_bdevs_list": [ 00:10:39.396 { 00:10:39.396 "name": "NewBaseBdev", 00:10:39.396 "uuid": "04172875-6276-405b-afef-6f3905da576f", 00:10:39.396 "is_configured": true, 00:10:39.396 "data_offset": 0, 00:10:39.396 "data_size": 65536 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "name": "BaseBdev2", 00:10:39.396 "uuid": "c6afa716-11f9-49cd-ada4-f36cd3ca61e9", 00:10:39.396 "is_configured": true, 00:10:39.396 "data_offset": 0, 00:10:39.396 "data_size": 65536 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "name": "BaseBdev3", 00:10:39.396 "uuid": "d2e1eecf-59b2-4722-a40e-22c6dbfe945f", 00:10:39.396 
"is_configured": true, 00:10:39.396 "data_offset": 0, 00:10:39.396 "data_size": 65536 00:10:39.396 }, 00:10:39.396 { 00:10:39.396 "name": "BaseBdev4", 00:10:39.396 "uuid": "4fac0110-b34b-4bd2-9075-e6199b711e77", 00:10:39.396 "is_configured": true, 00:10:39.396 "data_offset": 0, 00:10:39.396 "data_size": 65536 00:10:39.396 } 00:10:39.396 ] 00:10:39.396 } 00:10:39.396 } 00:10:39.396 }' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.668 BaseBdev2 00:10:39.668 BaseBdev3 00:10:39.668 BaseBdev4' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.668 23:05:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.668 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.669 23:05:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.669 23:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.669 [2024-11-18 23:05:59.008950] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.669 [2024-11-18 23:05:59.009014] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.669 [2024-11-18 23:05:59.009104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.669 [2024-11-18 23:05:59.009415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.669 [2024-11-18 23:05:59.009482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 83928 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83928 ']' 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83928 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.669 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83928 00:10:39.932 killing process with pid 83928 00:10:39.932 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.932 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.932 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83928' 00:10:39.932 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83928 00:10:39.932 [2024-11-18 23:05:59.055527] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.932 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83928 00:10:39.932 [2024-11-18 23:05:59.095143] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.191 00:10:40.191 real 0m9.497s 00:10:40.191 user 0m16.256s 00:10:40.191 sys 0m1.927s 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.191 ************************************ 00:10:40.191 END TEST raid_state_function_test 00:10:40.191 ************************************ 
00:10:40.191 23:05:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:40.191 23:05:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:40.191 23:05:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.191 23:05:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.191 ************************************ 00:10:40.191 START TEST raid_state_function_test_sb 00:10:40.191 ************************************ 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.191 
23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84585 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84585' 00:10:40.191 Process raid pid: 84585 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84585 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84585 ']' 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.191 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.191 [2024-11-18 23:05:59.509540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:40.191 [2024-11-18 23:05:59.509672] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.451 [2024-11-18 23:05:59.670404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.451 [2024-11-18 23:05:59.715040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.451 [2024-11-18 23:05:59.757383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.451 [2024-11-18 23:05:59.757413] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.022 [2024-11-18 23:06:00.330973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.022 [2024-11-18 23:06:00.331070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.022 [2024-11-18 23:06:00.331118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.022 [2024-11-18 23:06:00.331142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.022 [2024-11-18 23:06:00.331162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:41.022 [2024-11-18 23:06:00.331186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.022 [2024-11-18 23:06:00.331203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.022 [2024-11-18 23:06:00.331231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.022 23:06:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.022 "name": "Existed_Raid", 00:10:41.022 "uuid": "e0393d89-102d-4517-aa6d-dc87dcf6b086", 00:10:41.022 "strip_size_kb": 0, 00:10:41.022 "state": "configuring", 00:10:41.022 "raid_level": "raid1", 00:10:41.022 "superblock": true, 00:10:41.022 "num_base_bdevs": 4, 00:10:41.022 "num_base_bdevs_discovered": 0, 00:10:41.022 "num_base_bdevs_operational": 4, 00:10:41.022 "base_bdevs_list": [ 00:10:41.022 { 00:10:41.022 "name": "BaseBdev1", 00:10:41.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.022 "is_configured": false, 00:10:41.022 "data_offset": 0, 00:10:41.022 "data_size": 0 00:10:41.022 }, 00:10:41.022 { 00:10:41.022 "name": "BaseBdev2", 00:10:41.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.022 "is_configured": false, 00:10:41.022 "data_offset": 0, 00:10:41.022 "data_size": 0 00:10:41.022 }, 00:10:41.022 { 00:10:41.022 "name": "BaseBdev3", 00:10:41.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.022 "is_configured": false, 00:10:41.022 "data_offset": 0, 00:10:41.022 "data_size": 0 00:10:41.022 }, 00:10:41.022 { 00:10:41.022 "name": "BaseBdev4", 00:10:41.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.022 "is_configured": false, 00:10:41.022 "data_offset": 0, 00:10:41.022 "data_size": 0 00:10:41.022 } 00:10:41.022 ] 00:10:41.022 }' 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.022 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.610 [2024-11-18 23:06:00.754141] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.610 [2024-11-18 23:06:00.754179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.610 [2024-11-18 23:06:00.766164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.610 [2024-11-18 23:06:00.766240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.610 [2024-11-18 23:06:00.766283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.610 [2024-11-18 23:06:00.766313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.610 [2024-11-18 23:06:00.766332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.610 [2024-11-18 23:06:00.766352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.610 [2024-11-18 23:06:00.766369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:41.610 [2024-11-18 23:06:00.766389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.610 BaseBdev1 00:10:41.610 [2024-11-18 23:06:00.786934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.610 [ 00:10:41.610 { 00:10:41.610 "name": "BaseBdev1", 00:10:41.610 "aliases": [ 00:10:41.610 "a823a4b6-407f-4205-a233-1edf6e7a92f2" 00:10:41.610 ], 00:10:41.610 "product_name": "Malloc disk", 00:10:41.610 "block_size": 512, 00:10:41.610 "num_blocks": 65536, 00:10:41.610 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:41.610 "assigned_rate_limits": { 00:10:41.610 "rw_ios_per_sec": 0, 00:10:41.610 "rw_mbytes_per_sec": 0, 00:10:41.610 "r_mbytes_per_sec": 0, 00:10:41.610 "w_mbytes_per_sec": 0 00:10:41.610 }, 00:10:41.610 "claimed": true, 00:10:41.610 "claim_type": "exclusive_write", 00:10:41.610 "zoned": false, 00:10:41.610 "supported_io_types": { 00:10:41.610 "read": true, 00:10:41.610 "write": true, 00:10:41.610 "unmap": true, 00:10:41.610 "flush": true, 00:10:41.610 "reset": true, 00:10:41.610 "nvme_admin": false, 00:10:41.610 "nvme_io": false, 00:10:41.610 "nvme_io_md": false, 00:10:41.610 "write_zeroes": true, 00:10:41.610 "zcopy": true, 00:10:41.610 "get_zone_info": false, 00:10:41.610 "zone_management": false, 00:10:41.610 "zone_append": false, 00:10:41.610 "compare": false, 00:10:41.610 "compare_and_write": false, 00:10:41.610 "abort": true, 00:10:41.610 "seek_hole": false, 00:10:41.610 "seek_data": false, 00:10:41.610 "copy": true, 00:10:41.610 "nvme_iov_md": false 00:10:41.610 }, 00:10:41.610 "memory_domains": [ 00:10:41.610 { 00:10:41.610 "dma_device_id": "system", 00:10:41.610 "dma_device_type": 1 00:10:41.610 }, 00:10:41.610 { 00:10:41.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.610 "dma_device_type": 2 00:10:41.610 } 00:10:41.610 ], 00:10:41.610 "driver_specific": {} 
00:10:41.610 } 00:10:41.610 ] 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.610 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.611 "name": "Existed_Raid", 00:10:41.611 "uuid": "8b86a7de-e27a-4870-a377-7e616851efc1", 00:10:41.611 "strip_size_kb": 0, 00:10:41.611 "state": "configuring", 00:10:41.611 "raid_level": "raid1", 00:10:41.611 "superblock": true, 00:10:41.611 "num_base_bdevs": 4, 00:10:41.611 "num_base_bdevs_discovered": 1, 00:10:41.611 "num_base_bdevs_operational": 4, 00:10:41.611 "base_bdevs_list": [ 00:10:41.611 { 00:10:41.611 "name": "BaseBdev1", 00:10:41.611 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:41.611 "is_configured": true, 00:10:41.611 "data_offset": 2048, 00:10:41.611 "data_size": 63488 00:10:41.611 }, 00:10:41.611 { 00:10:41.611 "name": "BaseBdev2", 00:10:41.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.611 "is_configured": false, 00:10:41.611 "data_offset": 0, 00:10:41.611 "data_size": 0 00:10:41.611 }, 00:10:41.611 { 00:10:41.611 "name": "BaseBdev3", 00:10:41.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.611 "is_configured": false, 00:10:41.611 "data_offset": 0, 00:10:41.611 "data_size": 0 00:10:41.611 }, 00:10:41.611 { 00:10:41.611 "name": "BaseBdev4", 00:10:41.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.611 "is_configured": false, 00:10:41.611 "data_offset": 0, 00:10:41.611 "data_size": 0 00:10:41.611 } 00:10:41.611 ] 00:10:41.611 }' 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.611 23:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.181 [2024-11-18 23:06:01.274117] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.181 [2024-11-18 23:06:01.274226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 [2024-11-18 23:06:01.286139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.181 [2024-11-18 23:06:01.287999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.181 [2024-11-18 23:06:01.288091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.181 [2024-11-18 23:06:01.288118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.181 [2024-11-18 23:06:01.288140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.181 [2024-11-18 23:06:01.288158] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.181 [2024-11-18 23:06:01.288177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:42.181 23:06:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.181 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.181 "name": 
"Existed_Raid", 00:10:42.181 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:42.181 "strip_size_kb": 0, 00:10:42.181 "state": "configuring", 00:10:42.181 "raid_level": "raid1", 00:10:42.181 "superblock": true, 00:10:42.181 "num_base_bdevs": 4, 00:10:42.181 "num_base_bdevs_discovered": 1, 00:10:42.181 "num_base_bdevs_operational": 4, 00:10:42.181 "base_bdevs_list": [ 00:10:42.181 { 00:10:42.181 "name": "BaseBdev1", 00:10:42.181 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:42.181 "is_configured": true, 00:10:42.181 "data_offset": 2048, 00:10:42.181 "data_size": 63488 00:10:42.181 }, 00:10:42.181 { 00:10:42.181 "name": "BaseBdev2", 00:10:42.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.181 "is_configured": false, 00:10:42.181 "data_offset": 0, 00:10:42.181 "data_size": 0 00:10:42.181 }, 00:10:42.181 { 00:10:42.181 "name": "BaseBdev3", 00:10:42.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.181 "is_configured": false, 00:10:42.181 "data_offset": 0, 00:10:42.181 "data_size": 0 00:10:42.181 }, 00:10:42.181 { 00:10:42.181 "name": "BaseBdev4", 00:10:42.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.181 "is_configured": false, 00:10:42.181 "data_offset": 0, 00:10:42.181 "data_size": 0 00:10:42.181 } 00:10:42.181 ] 00:10:42.182 }' 00:10:42.182 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.182 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.442 [2024-11-18 23:06:01.720930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.442 
BaseBdev2 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.442 [ 00:10:42.442 { 00:10:42.442 "name": "BaseBdev2", 00:10:42.442 "aliases": [ 00:10:42.442 "1986ff19-c688-44c6-894e-efa4852c6d31" 00:10:42.442 ], 00:10:42.442 "product_name": "Malloc disk", 00:10:42.442 "block_size": 512, 00:10:42.442 "num_blocks": 65536, 00:10:42.442 "uuid": "1986ff19-c688-44c6-894e-efa4852c6d31", 00:10:42.442 "assigned_rate_limits": { 
00:10:42.442 "rw_ios_per_sec": 0, 00:10:42.442 "rw_mbytes_per_sec": 0, 00:10:42.442 "r_mbytes_per_sec": 0, 00:10:42.442 "w_mbytes_per_sec": 0 00:10:42.442 }, 00:10:42.442 "claimed": true, 00:10:42.442 "claim_type": "exclusive_write", 00:10:42.442 "zoned": false, 00:10:42.442 "supported_io_types": { 00:10:42.442 "read": true, 00:10:42.442 "write": true, 00:10:42.442 "unmap": true, 00:10:42.442 "flush": true, 00:10:42.442 "reset": true, 00:10:42.442 "nvme_admin": false, 00:10:42.442 "nvme_io": false, 00:10:42.442 "nvme_io_md": false, 00:10:42.442 "write_zeroes": true, 00:10:42.442 "zcopy": true, 00:10:42.442 "get_zone_info": false, 00:10:42.442 "zone_management": false, 00:10:42.442 "zone_append": false, 00:10:42.442 "compare": false, 00:10:42.442 "compare_and_write": false, 00:10:42.442 "abort": true, 00:10:42.442 "seek_hole": false, 00:10:42.442 "seek_data": false, 00:10:42.442 "copy": true, 00:10:42.442 "nvme_iov_md": false 00:10:42.442 }, 00:10:42.442 "memory_domains": [ 00:10:42.442 { 00:10:42.442 "dma_device_id": "system", 00:10:42.442 "dma_device_type": 1 00:10:42.442 }, 00:10:42.442 { 00:10:42.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.442 "dma_device_type": 2 00:10:42.442 } 00:10:42.442 ], 00:10:42.442 "driver_specific": {} 00:10:42.442 } 00:10:42.442 ] 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.442 "name": "Existed_Raid", 00:10:42.442 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:42.442 "strip_size_kb": 0, 00:10:42.442 "state": "configuring", 00:10:42.442 "raid_level": "raid1", 00:10:42.442 "superblock": true, 00:10:42.442 "num_base_bdevs": 4, 00:10:42.442 "num_base_bdevs_discovered": 2, 00:10:42.442 "num_base_bdevs_operational": 4, 00:10:42.442 
"base_bdevs_list": [ 00:10:42.442 { 00:10:42.442 "name": "BaseBdev1", 00:10:42.442 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:42.442 "is_configured": true, 00:10:42.442 "data_offset": 2048, 00:10:42.442 "data_size": 63488 00:10:42.442 }, 00:10:42.442 { 00:10:42.442 "name": "BaseBdev2", 00:10:42.442 "uuid": "1986ff19-c688-44c6-894e-efa4852c6d31", 00:10:42.442 "is_configured": true, 00:10:42.442 "data_offset": 2048, 00:10:42.442 "data_size": 63488 00:10:42.442 }, 00:10:42.442 { 00:10:42.442 "name": "BaseBdev3", 00:10:42.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.442 "is_configured": false, 00:10:42.442 "data_offset": 0, 00:10:42.442 "data_size": 0 00:10:42.442 }, 00:10:42.442 { 00:10:42.442 "name": "BaseBdev4", 00:10:42.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.442 "is_configured": false, 00:10:42.442 "data_offset": 0, 00:10:42.442 "data_size": 0 00:10:42.442 } 00:10:42.442 ] 00:10:42.442 }' 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.442 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.011 [2024-11-18 23:06:02.183008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.011 BaseBdev3 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.011 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.011 [ 00:10:43.011 { 00:10:43.011 "name": "BaseBdev3", 00:10:43.011 "aliases": [ 00:10:43.011 "ece17be2-0247-45f3-8e1f-071dd36137de" 00:10:43.011 ], 00:10:43.011 "product_name": "Malloc disk", 00:10:43.011 "block_size": 512, 00:10:43.011 "num_blocks": 65536, 00:10:43.011 "uuid": "ece17be2-0247-45f3-8e1f-071dd36137de", 00:10:43.011 "assigned_rate_limits": { 00:10:43.011 "rw_ios_per_sec": 0, 00:10:43.011 "rw_mbytes_per_sec": 0, 00:10:43.011 "r_mbytes_per_sec": 0, 00:10:43.011 "w_mbytes_per_sec": 0 00:10:43.011 }, 00:10:43.011 "claimed": true, 00:10:43.011 "claim_type": "exclusive_write", 00:10:43.011 "zoned": false, 00:10:43.011 "supported_io_types": { 00:10:43.011 "read": true, 00:10:43.011 
"write": true, 00:10:43.011 "unmap": true, 00:10:43.011 "flush": true, 00:10:43.012 "reset": true, 00:10:43.012 "nvme_admin": false, 00:10:43.012 "nvme_io": false, 00:10:43.012 "nvme_io_md": false, 00:10:43.012 "write_zeroes": true, 00:10:43.012 "zcopy": true, 00:10:43.012 "get_zone_info": false, 00:10:43.012 "zone_management": false, 00:10:43.012 "zone_append": false, 00:10:43.012 "compare": false, 00:10:43.012 "compare_and_write": false, 00:10:43.012 "abort": true, 00:10:43.012 "seek_hole": false, 00:10:43.012 "seek_data": false, 00:10:43.012 "copy": true, 00:10:43.012 "nvme_iov_md": false 00:10:43.012 }, 00:10:43.012 "memory_domains": [ 00:10:43.012 { 00:10:43.012 "dma_device_id": "system", 00:10:43.012 "dma_device_type": 1 00:10:43.012 }, 00:10:43.012 { 00:10:43.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.012 "dma_device_type": 2 00:10:43.012 } 00:10:43.012 ], 00:10:43.012 "driver_specific": {} 00:10:43.012 } 00:10:43.012 ] 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.012 "name": "Existed_Raid", 00:10:43.012 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:43.012 "strip_size_kb": 0, 00:10:43.012 "state": "configuring", 00:10:43.012 "raid_level": "raid1", 00:10:43.012 "superblock": true, 00:10:43.012 "num_base_bdevs": 4, 00:10:43.012 "num_base_bdevs_discovered": 3, 00:10:43.012 "num_base_bdevs_operational": 4, 00:10:43.012 "base_bdevs_list": [ 00:10:43.012 { 00:10:43.012 "name": "BaseBdev1", 00:10:43.012 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:43.012 "is_configured": true, 00:10:43.012 "data_offset": 2048, 00:10:43.012 "data_size": 63488 00:10:43.012 }, 00:10:43.012 { 00:10:43.012 "name": "BaseBdev2", 00:10:43.012 "uuid": 
"1986ff19-c688-44c6-894e-efa4852c6d31", 00:10:43.012 "is_configured": true, 00:10:43.012 "data_offset": 2048, 00:10:43.012 "data_size": 63488 00:10:43.012 }, 00:10:43.012 { 00:10:43.012 "name": "BaseBdev3", 00:10:43.012 "uuid": "ece17be2-0247-45f3-8e1f-071dd36137de", 00:10:43.012 "is_configured": true, 00:10:43.012 "data_offset": 2048, 00:10:43.012 "data_size": 63488 00:10:43.012 }, 00:10:43.012 { 00:10:43.012 "name": "BaseBdev4", 00:10:43.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.012 "is_configured": false, 00:10:43.012 "data_offset": 0, 00:10:43.012 "data_size": 0 00:10:43.012 } 00:10:43.012 ] 00:10:43.012 }' 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.012 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 [2024-11-18 23:06:02.665187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.582 [2024-11-18 23:06:02.665501] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:43.582 [2024-11-18 23:06:02.665555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.582 BaseBdev4 00:10:43.582 [2024-11-18 23:06:02.665857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:43.582 [2024-11-18 23:06:02.666013] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:43.582 [2024-11-18 23:06:02.666072] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:43.582 [2024-11-18 23:06:02.666231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 [ 00:10:43.582 { 00:10:43.582 "name": "BaseBdev4", 00:10:43.582 "aliases": [ 00:10:43.582 "12eea05b-9e68-44e0-9f8e-72bdc701a69d" 00:10:43.582 ], 00:10:43.582 "product_name": "Malloc disk", 00:10:43.582 "block_size": 512, 00:10:43.582 
"num_blocks": 65536, 00:10:43.582 "uuid": "12eea05b-9e68-44e0-9f8e-72bdc701a69d", 00:10:43.582 "assigned_rate_limits": { 00:10:43.582 "rw_ios_per_sec": 0, 00:10:43.582 "rw_mbytes_per_sec": 0, 00:10:43.582 "r_mbytes_per_sec": 0, 00:10:43.582 "w_mbytes_per_sec": 0 00:10:43.582 }, 00:10:43.582 "claimed": true, 00:10:43.582 "claim_type": "exclusive_write", 00:10:43.582 "zoned": false, 00:10:43.582 "supported_io_types": { 00:10:43.582 "read": true, 00:10:43.582 "write": true, 00:10:43.582 "unmap": true, 00:10:43.582 "flush": true, 00:10:43.582 "reset": true, 00:10:43.582 "nvme_admin": false, 00:10:43.582 "nvme_io": false, 00:10:43.582 "nvme_io_md": false, 00:10:43.582 "write_zeroes": true, 00:10:43.582 "zcopy": true, 00:10:43.582 "get_zone_info": false, 00:10:43.582 "zone_management": false, 00:10:43.582 "zone_append": false, 00:10:43.582 "compare": false, 00:10:43.582 "compare_and_write": false, 00:10:43.582 "abort": true, 00:10:43.582 "seek_hole": false, 00:10:43.582 "seek_data": false, 00:10:43.582 "copy": true, 00:10:43.582 "nvme_iov_md": false 00:10:43.582 }, 00:10:43.582 "memory_domains": [ 00:10:43.582 { 00:10:43.582 "dma_device_id": "system", 00:10:43.582 "dma_device_type": 1 00:10:43.582 }, 00:10:43.582 { 00:10:43.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.582 "dma_device_type": 2 00:10:43.582 } 00:10:43.582 ], 00:10:43.582 "driver_specific": {} 00:10:43.582 } 00:10:43.582 ] 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.582 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.582 "name": "Existed_Raid", 00:10:43.582 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:43.582 "strip_size_kb": 0, 00:10:43.582 "state": "online", 00:10:43.582 "raid_level": "raid1", 00:10:43.582 "superblock": true, 00:10:43.582 "num_base_bdevs": 4, 
00:10:43.582 "num_base_bdevs_discovered": 4, 00:10:43.582 "num_base_bdevs_operational": 4, 00:10:43.582 "base_bdevs_list": [ 00:10:43.582 { 00:10:43.582 "name": "BaseBdev1", 00:10:43.582 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:43.582 "is_configured": true, 00:10:43.582 "data_offset": 2048, 00:10:43.582 "data_size": 63488 00:10:43.582 }, 00:10:43.582 { 00:10:43.582 "name": "BaseBdev2", 00:10:43.582 "uuid": "1986ff19-c688-44c6-894e-efa4852c6d31", 00:10:43.582 "is_configured": true, 00:10:43.582 "data_offset": 2048, 00:10:43.583 "data_size": 63488 00:10:43.583 }, 00:10:43.583 { 00:10:43.583 "name": "BaseBdev3", 00:10:43.583 "uuid": "ece17be2-0247-45f3-8e1f-071dd36137de", 00:10:43.583 "is_configured": true, 00:10:43.583 "data_offset": 2048, 00:10:43.583 "data_size": 63488 00:10:43.583 }, 00:10:43.583 { 00:10:43.583 "name": "BaseBdev4", 00:10:43.583 "uuid": "12eea05b-9e68-44e0-9f8e-72bdc701a69d", 00:10:43.583 "is_configured": true, 00:10:43.583 "data_offset": 2048, 00:10:43.583 "data_size": 63488 00:10:43.583 } 00:10:43.583 ] 00:10:43.583 }' 00:10:43.583 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.583 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.844 
23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 [2024-11-18 23:06:03.136723] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.844 "name": "Existed_Raid", 00:10:43.844 "aliases": [ 00:10:43.844 "fa430c9d-7bc1-4009-a8da-50186779f299" 00:10:43.844 ], 00:10:43.844 "product_name": "Raid Volume", 00:10:43.844 "block_size": 512, 00:10:43.844 "num_blocks": 63488, 00:10:43.844 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:43.844 "assigned_rate_limits": { 00:10:43.844 "rw_ios_per_sec": 0, 00:10:43.844 "rw_mbytes_per_sec": 0, 00:10:43.844 "r_mbytes_per_sec": 0, 00:10:43.844 "w_mbytes_per_sec": 0 00:10:43.844 }, 00:10:43.844 "claimed": false, 00:10:43.844 "zoned": false, 00:10:43.844 "supported_io_types": { 00:10:43.844 "read": true, 00:10:43.844 "write": true, 00:10:43.844 "unmap": false, 00:10:43.844 "flush": false, 00:10:43.844 "reset": true, 00:10:43.844 "nvme_admin": false, 00:10:43.844 "nvme_io": false, 00:10:43.844 "nvme_io_md": false, 00:10:43.844 "write_zeroes": true, 00:10:43.844 "zcopy": false, 00:10:43.844 "get_zone_info": false, 00:10:43.844 "zone_management": false, 00:10:43.844 "zone_append": false, 00:10:43.844 "compare": false, 00:10:43.844 "compare_and_write": false, 00:10:43.844 "abort": false, 00:10:43.844 "seek_hole": false, 00:10:43.844 "seek_data": false, 00:10:43.844 "copy": false, 00:10:43.844 
"nvme_iov_md": false 00:10:43.844 }, 00:10:43.844 "memory_domains": [ 00:10:43.844 { 00:10:43.844 "dma_device_id": "system", 00:10:43.845 "dma_device_type": 1 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.845 "dma_device_type": 2 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "system", 00:10:43.845 "dma_device_type": 1 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.845 "dma_device_type": 2 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "system", 00:10:43.845 "dma_device_type": 1 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.845 "dma_device_type": 2 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "system", 00:10:43.845 "dma_device_type": 1 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.845 "dma_device_type": 2 00:10:43.845 } 00:10:43.845 ], 00:10:43.845 "driver_specific": { 00:10:43.845 "raid": { 00:10:43.845 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:43.845 "strip_size_kb": 0, 00:10:43.845 "state": "online", 00:10:43.845 "raid_level": "raid1", 00:10:43.845 "superblock": true, 00:10:43.845 "num_base_bdevs": 4, 00:10:43.845 "num_base_bdevs_discovered": 4, 00:10:43.845 "num_base_bdevs_operational": 4, 00:10:43.845 "base_bdevs_list": [ 00:10:43.845 { 00:10:43.845 "name": "BaseBdev1", 00:10:43.845 "uuid": "a823a4b6-407f-4205-a233-1edf6e7a92f2", 00:10:43.845 "is_configured": true, 00:10:43.845 "data_offset": 2048, 00:10:43.845 "data_size": 63488 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "name": "BaseBdev2", 00:10:43.845 "uuid": "1986ff19-c688-44c6-894e-efa4852c6d31", 00:10:43.845 "is_configured": true, 00:10:43.845 "data_offset": 2048, 00:10:43.845 "data_size": 63488 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "name": "BaseBdev3", 00:10:43.845 "uuid": "ece17be2-0247-45f3-8e1f-071dd36137de", 00:10:43.845 "is_configured": true, 
00:10:43.845 "data_offset": 2048, 00:10:43.845 "data_size": 63488 00:10:43.845 }, 00:10:43.845 { 00:10:43.845 "name": "BaseBdev4", 00:10:43.845 "uuid": "12eea05b-9e68-44e0-9f8e-72bdc701a69d", 00:10:43.845 "is_configured": true, 00:10:43.845 "data_offset": 2048, 00:10:43.845 "data_size": 63488 00:10:43.845 } 00:10:43.845 ] 00:10:43.845 } 00:10:43.845 } 00:10:43.845 }' 00:10:43.846 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.846 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:43.846 BaseBdev2 00:10:43.846 BaseBdev3 00:10:43.846 BaseBdev4' 00:10:43.846 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.108 23:06:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 [2024-11-18 23:06:03.447932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:44.108 23:06:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.368 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.368 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.368 "name": "Existed_Raid", 00:10:44.368 "uuid": "fa430c9d-7bc1-4009-a8da-50186779f299", 00:10:44.368 "strip_size_kb": 0, 00:10:44.368 
"state": "online", 00:10:44.368 "raid_level": "raid1", 00:10:44.368 "superblock": true, 00:10:44.368 "num_base_bdevs": 4, 00:10:44.368 "num_base_bdevs_discovered": 3, 00:10:44.368 "num_base_bdevs_operational": 3, 00:10:44.368 "base_bdevs_list": [ 00:10:44.368 { 00:10:44.368 "name": null, 00:10:44.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.368 "is_configured": false, 00:10:44.368 "data_offset": 0, 00:10:44.368 "data_size": 63488 00:10:44.368 }, 00:10:44.368 { 00:10:44.368 "name": "BaseBdev2", 00:10:44.368 "uuid": "1986ff19-c688-44c6-894e-efa4852c6d31", 00:10:44.368 "is_configured": true, 00:10:44.368 "data_offset": 2048, 00:10:44.368 "data_size": 63488 00:10:44.368 }, 00:10:44.368 { 00:10:44.368 "name": "BaseBdev3", 00:10:44.368 "uuid": "ece17be2-0247-45f3-8e1f-071dd36137de", 00:10:44.368 "is_configured": true, 00:10:44.368 "data_offset": 2048, 00:10:44.368 "data_size": 63488 00:10:44.368 }, 00:10:44.368 { 00:10:44.368 "name": "BaseBdev4", 00:10:44.368 "uuid": "12eea05b-9e68-44e0-9f8e-72bdc701a69d", 00:10:44.368 "is_configured": true, 00:10:44.368 "data_offset": 2048, 00:10:44.368 "data_size": 63488 00:10:44.368 } 00:10:44.368 ] 00:10:44.368 }' 00:10:44.368 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.368 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.629 23:06:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.629 [2024-11-18 23:06:03.962284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.629 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.891 [2024-11-18 23:06:04.033457] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.891 [2024-11-18 23:06:04.100581] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:44.891 [2024-11-18 23:06:04.100716] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.891 [2024-11-18 23:06:04.112321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.891 [2024-11-18 23:06:04.112419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.891 [2024-11-18 23:06:04.112462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.891 BaseBdev2 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.891 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:44.892 [ 00:10:44.892 { 00:10:44.892 "name": "BaseBdev2", 00:10:44.892 "aliases": [ 00:10:44.892 "fa8e9405-aefb-457f-a959-3af51d7c7020" 00:10:44.892 ], 00:10:44.892 "product_name": "Malloc disk", 00:10:44.892 "block_size": 512, 00:10:44.892 "num_blocks": 65536, 00:10:44.892 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:44.892 "assigned_rate_limits": { 00:10:44.892 "rw_ios_per_sec": 0, 00:10:44.892 "rw_mbytes_per_sec": 0, 00:10:44.892 "r_mbytes_per_sec": 0, 00:10:44.892 "w_mbytes_per_sec": 0 00:10:44.892 }, 00:10:44.892 "claimed": false, 00:10:44.892 "zoned": false, 00:10:44.892 "supported_io_types": { 00:10:44.892 "read": true, 00:10:44.892 "write": true, 00:10:44.892 "unmap": true, 00:10:44.892 "flush": true, 00:10:44.892 "reset": true, 00:10:44.892 "nvme_admin": false, 00:10:44.892 "nvme_io": false, 00:10:44.892 "nvme_io_md": false, 00:10:44.892 "write_zeroes": true, 00:10:44.892 "zcopy": true, 00:10:44.892 "get_zone_info": false, 00:10:44.892 "zone_management": false, 00:10:44.892 "zone_append": false, 00:10:44.892 "compare": false, 00:10:44.892 "compare_and_write": false, 00:10:44.892 "abort": true, 00:10:44.892 "seek_hole": false, 00:10:44.892 "seek_data": false, 00:10:44.892 "copy": true, 00:10:44.892 "nvme_iov_md": false 00:10:44.892 }, 00:10:44.892 "memory_domains": [ 00:10:44.892 { 00:10:44.892 "dma_device_id": "system", 00:10:44.892 "dma_device_type": 1 00:10:44.892 }, 00:10:44.892 { 00:10:44.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.892 "dma_device_type": 2 00:10:44.892 } 00:10:44.892 ], 00:10:44.892 "driver_specific": {} 00:10:44.892 } 00:10:44.892 ] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.892 23:06:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.892 BaseBdev3 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.892 23:06:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.892 [ 00:10:44.892 { 00:10:44.892 "name": "BaseBdev3", 00:10:44.892 "aliases": [ 00:10:44.892 "f5af8ac3-57f6-47cb-ab30-5a41405125e6" 00:10:44.892 ], 00:10:44.892 "product_name": "Malloc disk", 00:10:44.892 "block_size": 512, 00:10:44.892 "num_blocks": 65536, 00:10:44.892 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:44.892 "assigned_rate_limits": { 00:10:44.892 "rw_ios_per_sec": 0, 00:10:44.892 "rw_mbytes_per_sec": 0, 00:10:44.892 "r_mbytes_per_sec": 0, 00:10:44.892 "w_mbytes_per_sec": 0 00:10:44.892 }, 00:10:44.892 "claimed": false, 00:10:44.892 "zoned": false, 00:10:44.892 "supported_io_types": { 00:10:44.892 "read": true, 00:10:44.892 "write": true, 00:10:44.892 "unmap": true, 00:10:44.892 "flush": true, 00:10:44.892 "reset": true, 00:10:44.892 "nvme_admin": false, 00:10:44.892 "nvme_io": false, 00:10:44.892 "nvme_io_md": false, 00:10:44.892 "write_zeroes": true, 00:10:44.892 "zcopy": true, 00:10:44.892 "get_zone_info": false, 00:10:44.892 "zone_management": false, 00:10:44.892 "zone_append": false, 00:10:44.892 "compare": false, 00:10:44.892 "compare_and_write": false, 00:10:44.892 "abort": true, 00:10:44.892 "seek_hole": false, 00:10:44.892 "seek_data": false, 00:10:44.892 "copy": true, 00:10:44.892 "nvme_iov_md": false 00:10:44.892 }, 00:10:44.892 "memory_domains": [ 00:10:44.892 { 00:10:44.892 "dma_device_id": "system", 00:10:44.892 "dma_device_type": 1 00:10:44.892 }, 00:10:44.892 { 00:10:44.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.892 "dma_device_type": 2 00:10:44.892 } 00:10:44.892 ], 00:10:44.892 "driver_specific": {} 00:10:44.892 } 00:10:44.892 ] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.892 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.152 BaseBdev4 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.152 [ 00:10:45.152 { 00:10:45.152 "name": "BaseBdev4", 00:10:45.152 "aliases": [ 00:10:45.152 "9a5bfa64-a1f5-4401-a46d-9ae372259b7b" 00:10:45.152 ], 00:10:45.152 "product_name": "Malloc disk", 00:10:45.152 "block_size": 512, 00:10:45.152 "num_blocks": 65536, 00:10:45.152 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:45.152 "assigned_rate_limits": { 00:10:45.152 "rw_ios_per_sec": 0, 00:10:45.152 "rw_mbytes_per_sec": 0, 00:10:45.152 "r_mbytes_per_sec": 0, 00:10:45.152 "w_mbytes_per_sec": 0 00:10:45.152 }, 00:10:45.152 "claimed": false, 00:10:45.152 "zoned": false, 00:10:45.152 "supported_io_types": { 00:10:45.152 "read": true, 00:10:45.152 "write": true, 00:10:45.152 "unmap": true, 00:10:45.152 "flush": true, 00:10:45.152 "reset": true, 00:10:45.152 "nvme_admin": false, 00:10:45.152 "nvme_io": false, 00:10:45.152 "nvme_io_md": false, 00:10:45.152 "write_zeroes": true, 00:10:45.152 "zcopy": true, 00:10:45.152 "get_zone_info": false, 00:10:45.152 "zone_management": false, 00:10:45.152 "zone_append": false, 00:10:45.152 "compare": false, 00:10:45.152 "compare_and_write": false, 00:10:45.152 "abort": true, 00:10:45.152 "seek_hole": false, 00:10:45.152 "seek_data": false, 00:10:45.152 "copy": true, 00:10:45.152 "nvme_iov_md": false 00:10:45.152 }, 00:10:45.152 "memory_domains": [ 00:10:45.152 { 00:10:45.152 "dma_device_id": "system", 00:10:45.152 "dma_device_type": 1 00:10:45.152 }, 00:10:45.152 { 00:10:45.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.152 "dma_device_type": 2 00:10:45.152 } 00:10:45.152 ], 00:10:45.152 "driver_specific": {} 00:10:45.152 } 00:10:45.152 ] 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.152 [2024-11-18 23:06:04.320507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.152 [2024-11-18 23:06:04.320610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.152 [2024-11-18 23:06:04.320648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.152 [2024-11-18 23:06:04.322434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.152 [2024-11-18 23:06:04.322531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.152 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.153 "name": "Existed_Raid", 00:10:45.153 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:45.153 "strip_size_kb": 0, 00:10:45.153 "state": "configuring", 00:10:45.153 "raid_level": "raid1", 00:10:45.153 "superblock": true, 00:10:45.153 "num_base_bdevs": 4, 00:10:45.153 "num_base_bdevs_discovered": 3, 00:10:45.153 "num_base_bdevs_operational": 4, 00:10:45.153 "base_bdevs_list": [ 00:10:45.153 { 00:10:45.153 "name": "BaseBdev1", 00:10:45.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.153 "is_configured": false, 00:10:45.153 "data_offset": 0, 00:10:45.153 "data_size": 0 00:10:45.153 }, 00:10:45.153 { 00:10:45.153 "name": "BaseBdev2", 00:10:45.153 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 
00:10:45.153 "is_configured": true, 00:10:45.153 "data_offset": 2048, 00:10:45.153 "data_size": 63488 00:10:45.153 }, 00:10:45.153 { 00:10:45.153 "name": "BaseBdev3", 00:10:45.153 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:45.153 "is_configured": true, 00:10:45.153 "data_offset": 2048, 00:10:45.153 "data_size": 63488 00:10:45.153 }, 00:10:45.153 { 00:10:45.153 "name": "BaseBdev4", 00:10:45.153 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:45.153 "is_configured": true, 00:10:45.153 "data_offset": 2048, 00:10:45.153 "data_size": 63488 00:10:45.153 } 00:10:45.153 ] 00:10:45.153 }' 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.153 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.421 [2024-11-18 23:06:04.771715] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.421 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.702 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.702 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.702 "name": "Existed_Raid", 00:10:45.702 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:45.702 "strip_size_kb": 0, 00:10:45.702 "state": "configuring", 00:10:45.702 "raid_level": "raid1", 00:10:45.702 "superblock": true, 00:10:45.702 "num_base_bdevs": 4, 00:10:45.702 "num_base_bdevs_discovered": 2, 00:10:45.702 "num_base_bdevs_operational": 4, 00:10:45.702 "base_bdevs_list": [ 00:10:45.702 { 00:10:45.702 "name": "BaseBdev1", 00:10:45.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.702 "is_configured": false, 00:10:45.702 "data_offset": 0, 00:10:45.702 "data_size": 0 00:10:45.702 }, 00:10:45.702 { 00:10:45.702 "name": null, 00:10:45.702 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:45.702 
"is_configured": false, 00:10:45.702 "data_offset": 0, 00:10:45.702 "data_size": 63488 00:10:45.702 }, 00:10:45.702 { 00:10:45.702 "name": "BaseBdev3", 00:10:45.702 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:45.702 "is_configured": true, 00:10:45.702 "data_offset": 2048, 00:10:45.702 "data_size": 63488 00:10:45.702 }, 00:10:45.702 { 00:10:45.702 "name": "BaseBdev4", 00:10:45.702 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:45.702 "is_configured": true, 00:10:45.702 "data_offset": 2048, 00:10:45.702 "data_size": 63488 00:10:45.702 } 00:10:45.702 ] 00:10:45.702 }' 00:10:45.702 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.702 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.961 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.961 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.961 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.961 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.962 [2024-11-18 23:06:05.301781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.962 BaseBdev1 
00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.962 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.962 [ 00:10:45.962 { 00:10:45.962 "name": "BaseBdev1", 00:10:45.962 "aliases": [ 00:10:45.962 "f5f4ddd2-2be0-4dfc-aa04-470656c35e36" 00:10:45.962 ], 00:10:45.962 "product_name": "Malloc disk", 00:10:45.962 "block_size": 512, 00:10:45.962 "num_blocks": 65536, 00:10:45.962 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:45.962 "assigned_rate_limits": { 00:10:45.962 
"rw_ios_per_sec": 0, 00:10:45.962 "rw_mbytes_per_sec": 0, 00:10:45.962 "r_mbytes_per_sec": 0, 00:10:45.962 "w_mbytes_per_sec": 0 00:10:45.962 }, 00:10:45.962 "claimed": true, 00:10:45.962 "claim_type": "exclusive_write", 00:10:45.962 "zoned": false, 00:10:45.962 "supported_io_types": { 00:10:45.962 "read": true, 00:10:45.962 "write": true, 00:10:45.962 "unmap": true, 00:10:45.962 "flush": true, 00:10:45.962 "reset": true, 00:10:45.962 "nvme_admin": false, 00:10:45.962 "nvme_io": false, 00:10:45.962 "nvme_io_md": false, 00:10:45.962 "write_zeroes": true, 00:10:45.962 "zcopy": true, 00:10:45.962 "get_zone_info": false, 00:10:45.962 "zone_management": false, 00:10:45.962 "zone_append": false, 00:10:45.962 "compare": false, 00:10:45.962 "compare_and_write": false, 00:10:45.962 "abort": true, 00:10:45.962 "seek_hole": false, 00:10:45.962 "seek_data": false, 00:10:45.962 "copy": true, 00:10:45.962 "nvme_iov_md": false 00:10:45.962 }, 00:10:45.962 "memory_domains": [ 00:10:45.962 { 00:10:45.962 "dma_device_id": "system", 00:10:45.962 "dma_device_type": 1 00:10:45.962 }, 00:10:45.962 { 00:10:45.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.962 "dma_device_type": 2 00:10:45.962 } 00:10:45.962 ], 00:10:45.962 "driver_specific": {} 00:10:45.962 } 00:10:45.962 ] 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.221 "name": "Existed_Raid", 00:10:46.221 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:46.221 "strip_size_kb": 0, 00:10:46.221 "state": "configuring", 00:10:46.221 "raid_level": "raid1", 00:10:46.221 "superblock": true, 00:10:46.221 "num_base_bdevs": 4, 00:10:46.221 "num_base_bdevs_discovered": 3, 00:10:46.221 "num_base_bdevs_operational": 4, 00:10:46.221 "base_bdevs_list": [ 00:10:46.221 { 00:10:46.221 "name": "BaseBdev1", 00:10:46.221 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:46.221 "is_configured": true, 00:10:46.221 "data_offset": 2048, 00:10:46.221 "data_size": 63488 
00:10:46.221 }, 00:10:46.221 { 00:10:46.221 "name": null, 00:10:46.221 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:46.221 "is_configured": false, 00:10:46.221 "data_offset": 0, 00:10:46.221 "data_size": 63488 00:10:46.221 }, 00:10:46.221 { 00:10:46.221 "name": "BaseBdev3", 00:10:46.221 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:46.221 "is_configured": true, 00:10:46.221 "data_offset": 2048, 00:10:46.221 "data_size": 63488 00:10:46.221 }, 00:10:46.221 { 00:10:46.221 "name": "BaseBdev4", 00:10:46.221 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:46.221 "is_configured": true, 00:10:46.221 "data_offset": 2048, 00:10:46.221 "data_size": 63488 00:10:46.221 } 00:10:46.221 ] 00:10:46.221 }' 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.221 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.481 
[2024-11-18 23:06:05.832925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.481 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.482 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.742 23:06:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.742 "name": "Existed_Raid", 00:10:46.742 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:46.742 "strip_size_kb": 0, 00:10:46.742 "state": "configuring", 00:10:46.742 "raid_level": "raid1", 00:10:46.742 "superblock": true, 00:10:46.742 "num_base_bdevs": 4, 00:10:46.742 "num_base_bdevs_discovered": 2, 00:10:46.742 "num_base_bdevs_operational": 4, 00:10:46.742 "base_bdevs_list": [ 00:10:46.742 { 00:10:46.742 "name": "BaseBdev1", 00:10:46.742 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:46.742 "is_configured": true, 00:10:46.742 "data_offset": 2048, 00:10:46.742 "data_size": 63488 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "name": null, 00:10:46.742 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:46.742 "is_configured": false, 00:10:46.742 "data_offset": 0, 00:10:46.742 "data_size": 63488 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "name": null, 00:10:46.742 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:46.742 "is_configured": false, 00:10:46.742 "data_offset": 0, 00:10:46.742 "data_size": 63488 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "name": "BaseBdev4", 00:10:46.742 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:46.742 "is_configured": true, 00:10:46.742 "data_offset": 2048, 00:10:46.742 "data_size": 63488 00:10:46.742 } 00:10:46.742 ] 00:10:46.742 }' 00:10:46.742 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.742 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.002 23:06:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.002 [2024-11-18 23:06:06.288213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.002 "name": "Existed_Raid", 00:10:47.002 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:47.002 "strip_size_kb": 0, 00:10:47.002 "state": "configuring", 00:10:47.002 "raid_level": "raid1", 00:10:47.002 "superblock": true, 00:10:47.002 "num_base_bdevs": 4, 00:10:47.002 "num_base_bdevs_discovered": 3, 00:10:47.002 "num_base_bdevs_operational": 4, 00:10:47.002 "base_bdevs_list": [ 00:10:47.002 { 00:10:47.002 "name": "BaseBdev1", 00:10:47.002 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:47.002 "is_configured": true, 00:10:47.002 "data_offset": 2048, 00:10:47.002 "data_size": 63488 00:10:47.002 }, 00:10:47.002 { 00:10:47.002 "name": null, 00:10:47.002 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:47.002 "is_configured": false, 00:10:47.002 "data_offset": 0, 00:10:47.002 "data_size": 63488 00:10:47.002 }, 00:10:47.002 { 00:10:47.002 "name": "BaseBdev3", 00:10:47.002 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:47.002 "is_configured": true, 00:10:47.002 "data_offset": 2048, 00:10:47.002 "data_size": 63488 00:10:47.002 }, 00:10:47.002 { 00:10:47.002 "name": "BaseBdev4", 00:10:47.002 "uuid": 
"9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:47.002 "is_configured": true, 00:10:47.002 "data_offset": 2048, 00:10:47.002 "data_size": 63488 00:10:47.002 } 00:10:47.002 ] 00:10:47.002 }' 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.002 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.571 [2024-11-18 23:06:06.731455] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.571 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.572 "name": "Existed_Raid", 00:10:47.572 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:47.572 "strip_size_kb": 0, 00:10:47.572 "state": "configuring", 00:10:47.572 "raid_level": "raid1", 00:10:47.572 "superblock": true, 00:10:47.572 "num_base_bdevs": 4, 00:10:47.572 "num_base_bdevs_discovered": 2, 00:10:47.572 "num_base_bdevs_operational": 4, 00:10:47.572 "base_bdevs_list": [ 00:10:47.572 { 00:10:47.572 "name": null, 00:10:47.572 
"uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:47.572 "is_configured": false, 00:10:47.572 "data_offset": 0, 00:10:47.572 "data_size": 63488 00:10:47.572 }, 00:10:47.572 { 00:10:47.572 "name": null, 00:10:47.572 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:47.572 "is_configured": false, 00:10:47.572 "data_offset": 0, 00:10:47.572 "data_size": 63488 00:10:47.572 }, 00:10:47.572 { 00:10:47.572 "name": "BaseBdev3", 00:10:47.572 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:47.572 "is_configured": true, 00:10:47.572 "data_offset": 2048, 00:10:47.572 "data_size": 63488 00:10:47.572 }, 00:10:47.572 { 00:10:47.572 "name": "BaseBdev4", 00:10:47.572 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:47.572 "is_configured": true, 00:10:47.572 "data_offset": 2048, 00:10:47.572 "data_size": 63488 00:10:47.572 } 00:10:47.572 ] 00:10:47.572 }' 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.572 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.831 [2024-11-18 23:06:07.141107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.831 23:06:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.831 "name": "Existed_Raid", 00:10:47.831 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:47.831 "strip_size_kb": 0, 00:10:47.831 "state": "configuring", 00:10:47.831 "raid_level": "raid1", 00:10:47.831 "superblock": true, 00:10:47.831 "num_base_bdevs": 4, 00:10:47.831 "num_base_bdevs_discovered": 3, 00:10:47.831 "num_base_bdevs_operational": 4, 00:10:47.831 "base_bdevs_list": [ 00:10:47.831 { 00:10:47.831 "name": null, 00:10:47.831 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:47.831 "is_configured": false, 00:10:47.831 "data_offset": 0, 00:10:47.831 "data_size": 63488 00:10:47.831 }, 00:10:47.831 { 00:10:47.831 "name": "BaseBdev2", 00:10:47.831 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:47.831 "is_configured": true, 00:10:47.831 "data_offset": 2048, 00:10:47.831 "data_size": 63488 00:10:47.831 }, 00:10:47.831 { 00:10:47.831 "name": "BaseBdev3", 00:10:47.831 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:47.831 "is_configured": true, 00:10:47.831 "data_offset": 2048, 00:10:47.831 "data_size": 63488 00:10:47.831 }, 00:10:47.831 { 00:10:47.831 "name": "BaseBdev4", 00:10:47.831 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:47.831 "is_configured": true, 00:10:47.831 "data_offset": 2048, 00:10:47.831 "data_size": 63488 00:10:47.831 } 00:10:47.831 ] 00:10:47.831 }' 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.831 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.402 23:06:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f5f4ddd2-2be0-4dfc-aa04-470656c35e36 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 [2024-11-18 23:06:07.635098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.402 [2024-11-18 23:06:07.635417] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:48.402 [2024-11-18 23:06:07.635474] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.402 [2024-11-18 23:06:07.635756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:48.402 NewBaseBdev 00:10:48.402 [2024-11-18 23:06:07.635937] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:48.402 [2024-11-18 23:06:07.635979] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:48.402 [2024-11-18 23:06:07.636116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 23:06:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 [ 00:10:48.402 { 00:10:48.402 "name": "NewBaseBdev", 00:10:48.402 "aliases": [ 00:10:48.402 "f5f4ddd2-2be0-4dfc-aa04-470656c35e36" 00:10:48.402 ], 00:10:48.402 "product_name": "Malloc disk", 00:10:48.402 "block_size": 512, 00:10:48.402 "num_blocks": 65536, 00:10:48.402 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:48.402 "assigned_rate_limits": { 00:10:48.402 "rw_ios_per_sec": 0, 00:10:48.402 "rw_mbytes_per_sec": 0, 00:10:48.402 "r_mbytes_per_sec": 0, 00:10:48.402 "w_mbytes_per_sec": 0 00:10:48.402 }, 00:10:48.402 "claimed": true, 00:10:48.402 "claim_type": "exclusive_write", 00:10:48.402 "zoned": false, 00:10:48.402 "supported_io_types": { 00:10:48.402 "read": true, 00:10:48.402 "write": true, 00:10:48.402 "unmap": true, 00:10:48.402 "flush": true, 00:10:48.402 "reset": true, 00:10:48.402 "nvme_admin": false, 00:10:48.402 "nvme_io": false, 00:10:48.402 "nvme_io_md": false, 00:10:48.402 "write_zeroes": true, 00:10:48.402 "zcopy": true, 00:10:48.402 "get_zone_info": false, 00:10:48.402 "zone_management": false, 00:10:48.402 "zone_append": false, 00:10:48.402 "compare": false, 00:10:48.402 "compare_and_write": false, 00:10:48.402 "abort": true, 00:10:48.402 "seek_hole": false, 00:10:48.402 "seek_data": false, 00:10:48.402 "copy": true, 00:10:48.402 "nvme_iov_md": false 00:10:48.402 }, 00:10:48.402 "memory_domains": [ 00:10:48.402 { 00:10:48.402 "dma_device_id": "system", 00:10:48.402 "dma_device_type": 1 00:10:48.402 }, 00:10:48.402 { 00:10:48.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.402 "dma_device_type": 2 00:10:48.402 } 00:10:48.402 ], 00:10:48.402 "driver_specific": {} 00:10:48.402 } 00:10:48.402 ] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:48.402 23:06:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.402 "name": "Existed_Raid", 00:10:48.402 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:48.402 "strip_size_kb": 0, 00:10:48.402 
"state": "online", 00:10:48.402 "raid_level": "raid1", 00:10:48.402 "superblock": true, 00:10:48.402 "num_base_bdevs": 4, 00:10:48.402 "num_base_bdevs_discovered": 4, 00:10:48.403 "num_base_bdevs_operational": 4, 00:10:48.403 "base_bdevs_list": [ 00:10:48.403 { 00:10:48.403 "name": "NewBaseBdev", 00:10:48.403 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:48.403 "is_configured": true, 00:10:48.403 "data_offset": 2048, 00:10:48.403 "data_size": 63488 00:10:48.403 }, 00:10:48.403 { 00:10:48.403 "name": "BaseBdev2", 00:10:48.403 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:48.403 "is_configured": true, 00:10:48.403 "data_offset": 2048, 00:10:48.403 "data_size": 63488 00:10:48.403 }, 00:10:48.403 { 00:10:48.403 "name": "BaseBdev3", 00:10:48.403 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:48.403 "is_configured": true, 00:10:48.403 "data_offset": 2048, 00:10:48.403 "data_size": 63488 00:10:48.403 }, 00:10:48.403 { 00:10:48.403 "name": "BaseBdev4", 00:10:48.403 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:48.403 "is_configured": true, 00:10:48.403 "data_offset": 2048, 00:10:48.403 "data_size": 63488 00:10:48.403 } 00:10:48.403 ] 00:10:48.403 }' 00:10:48.403 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.403 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.661 
23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.661 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.661 [2024-11-18 23:06:08.026739] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.921 "name": "Existed_Raid", 00:10:48.921 "aliases": [ 00:10:48.921 "1aa9fee4-e151-4432-8160-8d95d8c0558b" 00:10:48.921 ], 00:10:48.921 "product_name": "Raid Volume", 00:10:48.921 "block_size": 512, 00:10:48.921 "num_blocks": 63488, 00:10:48.921 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:48.921 "assigned_rate_limits": { 00:10:48.921 "rw_ios_per_sec": 0, 00:10:48.921 "rw_mbytes_per_sec": 0, 00:10:48.921 "r_mbytes_per_sec": 0, 00:10:48.921 "w_mbytes_per_sec": 0 00:10:48.921 }, 00:10:48.921 "claimed": false, 00:10:48.921 "zoned": false, 00:10:48.921 "supported_io_types": { 00:10:48.921 "read": true, 00:10:48.921 "write": true, 00:10:48.921 "unmap": false, 00:10:48.921 "flush": false, 00:10:48.921 "reset": true, 00:10:48.921 "nvme_admin": false, 00:10:48.921 "nvme_io": false, 00:10:48.921 "nvme_io_md": false, 00:10:48.921 "write_zeroes": true, 00:10:48.921 "zcopy": false, 00:10:48.921 "get_zone_info": false, 00:10:48.921 "zone_management": false, 00:10:48.921 "zone_append": false, 00:10:48.921 "compare": false, 00:10:48.921 "compare_and_write": false, 00:10:48.921 
"abort": false, 00:10:48.921 "seek_hole": false, 00:10:48.921 "seek_data": false, 00:10:48.921 "copy": false, 00:10:48.921 "nvme_iov_md": false 00:10:48.921 }, 00:10:48.921 "memory_domains": [ 00:10:48.921 { 00:10:48.921 "dma_device_id": "system", 00:10:48.921 "dma_device_type": 1 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.921 "dma_device_type": 2 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "system", 00:10:48.921 "dma_device_type": 1 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.921 "dma_device_type": 2 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "system", 00:10:48.921 "dma_device_type": 1 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.921 "dma_device_type": 2 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "system", 00:10:48.921 "dma_device_type": 1 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.921 "dma_device_type": 2 00:10:48.921 } 00:10:48.921 ], 00:10:48.921 "driver_specific": { 00:10:48.921 "raid": { 00:10:48.921 "uuid": "1aa9fee4-e151-4432-8160-8d95d8c0558b", 00:10:48.921 "strip_size_kb": 0, 00:10:48.921 "state": "online", 00:10:48.921 "raid_level": "raid1", 00:10:48.921 "superblock": true, 00:10:48.921 "num_base_bdevs": 4, 00:10:48.921 "num_base_bdevs_discovered": 4, 00:10:48.921 "num_base_bdevs_operational": 4, 00:10:48.921 "base_bdevs_list": [ 00:10:48.921 { 00:10:48.921 "name": "NewBaseBdev", 00:10:48.921 "uuid": "f5f4ddd2-2be0-4dfc-aa04-470656c35e36", 00:10:48.921 "is_configured": true, 00:10:48.921 "data_offset": 2048, 00:10:48.921 "data_size": 63488 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "name": "BaseBdev2", 00:10:48.921 "uuid": "fa8e9405-aefb-457f-a959-3af51d7c7020", 00:10:48.921 "is_configured": true, 00:10:48.921 "data_offset": 2048, 00:10:48.921 "data_size": 63488 00:10:48.921 }, 00:10:48.921 { 
00:10:48.921 "name": "BaseBdev3", 00:10:48.921 "uuid": "f5af8ac3-57f6-47cb-ab30-5a41405125e6", 00:10:48.921 "is_configured": true, 00:10:48.921 "data_offset": 2048, 00:10:48.921 "data_size": 63488 00:10:48.921 }, 00:10:48.921 { 00:10:48.921 "name": "BaseBdev4", 00:10:48.921 "uuid": "9a5bfa64-a1f5-4401-a46d-9ae372259b7b", 00:10:48.921 "is_configured": true, 00:10:48.921 "data_offset": 2048, 00:10:48.921 "data_size": 63488 00:10:48.921 } 00:10:48.921 ] 00:10:48.921 } 00:10:48.921 } 00:10:48.921 }' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.921 BaseBdev2 00:10:48.921 BaseBdev3 00:10:48.921 BaseBdev4' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.921 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.182 [2024-11-18 23:06:08.309973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.182 [2024-11-18 23:06:08.310035] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.182 [2024-11-18 23:06:08.310128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.182 [2024-11-18 23:06:08.310418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.182 [2024-11-18 23:06:08.310480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84585 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84585 ']' 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84585 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84585 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84585' 00:10:49.182 killing process with pid 84585 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84585 00:10:49.182 [2024-11-18 23:06:08.357341] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.182 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84585 00:10:49.182 [2024-11-18 23:06:08.397178] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.445 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:49.445 00:10:49.445 real 0m9.230s 00:10:49.445 user 0m15.737s 00:10:49.445 sys 0m1.914s 00:10:49.445 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:49.445 23:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.445 ************************************ 00:10:49.445 END TEST raid_state_function_test_sb 00:10:49.445 ************************************ 00:10:49.445 23:06:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:49.445 23:06:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:49.445 23:06:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.445 23:06:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.445 ************************************ 00:10:49.445 START TEST raid_superblock_test 00:10:49.445 ************************************ 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:49.445 23:06:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:49.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85233 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85233 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85233 ']' 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.445 23:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.445 [2024-11-18 23:06:08.797626] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:49.445 [2024-11-18 23:06:08.797736] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85233 ] 00:10:49.708 [2024-11-18 23:06:08.954339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.708 [2024-11-18 23:06:08.998068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.708 [2024-11-18 23:06:09.039943] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.708 [2024-11-18 23:06:09.039981] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:50.277 
23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.277 malloc1 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.277 [2024-11-18 23:06:09.646033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.277 [2024-11-18 23:06:09.646165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.277 [2024-11-18 23:06:09.646211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:50.277 [2024-11-18 23:06:09.646274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.277 [2024-11-18 23:06:09.648449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.277 [2024-11-18 23:06:09.648544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.277 pt1 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:50.277 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 malloc2 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 [2024-11-18 23:06:09.694711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.538 [2024-11-18 23:06:09.694968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.538 [2024-11-18 23:06:09.695028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:50.538 [2024-11-18 23:06:09.695061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.538 [2024-11-18 23:06:09.699685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.538 [2024-11-18 23:06:09.699752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.538 
pt2 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 malloc3 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 [2024-11-18 23:06:09.725431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.538 [2024-11-18 23:06:09.725521] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.538 [2024-11-18 23:06:09.725556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:50.538 [2024-11-18 23:06:09.725587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.538 [2024-11-18 23:06:09.727669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.538 [2024-11-18 23:06:09.727742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.538 pt3 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 malloc4 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 [2024-11-18 23:06:09.757805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:50.538 [2024-11-18 23:06:09.757906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.538 [2024-11-18 23:06:09.757937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:50.538 [2024-11-18 23:06:09.757967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.538 [2024-11-18 23:06:09.759979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.538 [2024-11-18 23:06:09.760049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:50.538 pt4 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.538 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.538 [2024-11-18 23:06:09.769834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.538 [2024-11-18 23:06:09.771639] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.538 [2024-11-18 23:06:09.771750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.538 [2024-11-18 23:06:09.771809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:50.539 [2024-11-18 23:06:09.771990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:50.539 [2024-11-18 23:06:09.772041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:50.539 [2024-11-18 23:06:09.772336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:50.539 [2024-11-18 23:06:09.772528] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:50.539 [2024-11-18 23:06:09.772570] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:50.539 [2024-11-18 23:06:09.772723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.539 
23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.539 "name": "raid_bdev1", 00:10:50.539 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:50.539 "strip_size_kb": 0, 00:10:50.539 "state": "online", 00:10:50.539 "raid_level": "raid1", 00:10:50.539 "superblock": true, 00:10:50.539 "num_base_bdevs": 4, 00:10:50.539 "num_base_bdevs_discovered": 4, 00:10:50.539 "num_base_bdevs_operational": 4, 00:10:50.539 "base_bdevs_list": [ 00:10:50.539 { 00:10:50.539 "name": "pt1", 00:10:50.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.539 "is_configured": true, 00:10:50.539 "data_offset": 2048, 00:10:50.539 "data_size": 63488 00:10:50.539 }, 00:10:50.539 { 00:10:50.539 "name": "pt2", 00:10:50.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.539 "is_configured": true, 00:10:50.539 "data_offset": 2048, 00:10:50.539 "data_size": 63488 00:10:50.539 }, 00:10:50.539 { 00:10:50.539 "name": "pt3", 00:10:50.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.539 "is_configured": true, 00:10:50.539 "data_offset": 2048, 00:10:50.539 "data_size": 63488 
00:10:50.539 }, 00:10:50.539 { 00:10:50.539 "name": "pt4", 00:10:50.539 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.539 "is_configured": true, 00:10:50.539 "data_offset": 2048, 00:10:50.539 "data_size": 63488 00:10:50.539 } 00:10:50.539 ] 00:10:50.539 }' 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.539 23:06:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.109 [2024-11-18 23:06:10.201373] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.109 "name": "raid_bdev1", 00:10:51.109 "aliases": [ 00:10:51.109 "c43c841b-efe9-41ab-9ea8-19c440662836" 00:10:51.109 ], 
00:10:51.109 "product_name": "Raid Volume", 00:10:51.109 "block_size": 512, 00:10:51.109 "num_blocks": 63488, 00:10:51.109 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:51.109 "assigned_rate_limits": { 00:10:51.109 "rw_ios_per_sec": 0, 00:10:51.109 "rw_mbytes_per_sec": 0, 00:10:51.109 "r_mbytes_per_sec": 0, 00:10:51.109 "w_mbytes_per_sec": 0 00:10:51.109 }, 00:10:51.109 "claimed": false, 00:10:51.109 "zoned": false, 00:10:51.109 "supported_io_types": { 00:10:51.109 "read": true, 00:10:51.109 "write": true, 00:10:51.109 "unmap": false, 00:10:51.109 "flush": false, 00:10:51.109 "reset": true, 00:10:51.109 "nvme_admin": false, 00:10:51.109 "nvme_io": false, 00:10:51.109 "nvme_io_md": false, 00:10:51.109 "write_zeroes": true, 00:10:51.109 "zcopy": false, 00:10:51.109 "get_zone_info": false, 00:10:51.109 "zone_management": false, 00:10:51.109 "zone_append": false, 00:10:51.109 "compare": false, 00:10:51.109 "compare_and_write": false, 00:10:51.109 "abort": false, 00:10:51.109 "seek_hole": false, 00:10:51.109 "seek_data": false, 00:10:51.109 "copy": false, 00:10:51.109 "nvme_iov_md": false 00:10:51.109 }, 00:10:51.109 "memory_domains": [ 00:10:51.109 { 00:10:51.109 "dma_device_id": "system", 00:10:51.109 "dma_device_type": 1 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.109 "dma_device_type": 2 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": "system", 00:10:51.109 "dma_device_type": 1 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.109 "dma_device_type": 2 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": "system", 00:10:51.109 "dma_device_type": 1 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.109 "dma_device_type": 2 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": "system", 00:10:51.109 "dma_device_type": 1 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:51.109 "dma_device_type": 2 00:10:51.109 } 00:10:51.109 ], 00:10:51.109 "driver_specific": { 00:10:51.109 "raid": { 00:10:51.109 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:51.109 "strip_size_kb": 0, 00:10:51.109 "state": "online", 00:10:51.109 "raid_level": "raid1", 00:10:51.109 "superblock": true, 00:10:51.109 "num_base_bdevs": 4, 00:10:51.109 "num_base_bdevs_discovered": 4, 00:10:51.109 "num_base_bdevs_operational": 4, 00:10:51.109 "base_bdevs_list": [ 00:10:51.109 { 00:10:51.109 "name": "pt1", 00:10:51.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.109 "is_configured": true, 00:10:51.109 "data_offset": 2048, 00:10:51.109 "data_size": 63488 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "name": "pt2", 00:10:51.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.109 "is_configured": true, 00:10:51.109 "data_offset": 2048, 00:10:51.109 "data_size": 63488 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "name": "pt3", 00:10:51.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.109 "is_configured": true, 00:10:51.109 "data_offset": 2048, 00:10:51.109 "data_size": 63488 00:10:51.109 }, 00:10:51.109 { 00:10:51.109 "name": "pt4", 00:10:51.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.109 "is_configured": true, 00:10:51.109 "data_offset": 2048, 00:10:51.109 "data_size": 63488 00:10:51.109 } 00:10:51.109 ] 00:10:51.109 } 00:10:51.109 } 00:10:51.109 }' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.109 pt2 00:10:51.109 pt3 00:10:51.109 pt4' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.109 23:06:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:51.109 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 [2024-11-18 23:06:10.488852] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c43c841b-efe9-41ab-9ea8-19c440662836 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c43c841b-efe9-41ab-9ea8-19c440662836 ']' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 [2024-11-18 23:06:10.532480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.370 [2024-11-18 23:06:10.532543] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.370 [2024-11-18 23:06:10.532641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.370 [2024-11-18 23:06:10.532743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.370 [2024-11-18 23:06:10.532820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.370 23:06:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 [2024-11-18 23:06:10.700254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:51.370 [2024-11-18 23:06:10.702173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:51.370 [2024-11-18 23:06:10.702267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:51.370 [2024-11-18 23:06:10.702328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:51.370 [2024-11-18 23:06:10.702420] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:51.370 [2024-11-18 23:06:10.702513] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:51.370 [2024-11-18 23:06:10.702606] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:51.370 [2024-11-18 23:06:10.702660] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:51.370 [2024-11-18 23:06:10.702709] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.370 [2024-11-18 23:06:10.702741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:10:51.370 request: 00:10:51.370 { 00:10:51.370 "name": "raid_bdev1", 00:10:51.370 "raid_level": "raid1", 00:10:51.370 "base_bdevs": [ 00:10:51.370 "malloc1", 00:10:51.370 "malloc2", 00:10:51.370 "malloc3", 00:10:51.370 "malloc4" 00:10:51.370 ], 00:10:51.370 "superblock": false, 00:10:51.370 "method": "bdev_raid_create", 00:10:51.370 "req_id": 1 00:10:51.370 } 00:10:51.370 Got JSON-RPC error response 00:10:51.370 response: 00:10:51.370 { 00:10:51.370 "code": -17, 00:10:51.370 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:51.370 } 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:51.370 
23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.370 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.631 [2024-11-18 23:06:10.748123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:51.631 [2024-11-18 23:06:10.748200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.631 [2024-11-18 23:06:10.748251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:51.631 [2024-11-18 23:06:10.748297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.631 [2024-11-18 23:06:10.750411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.631 [2024-11-18 23:06:10.750475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:51.631 [2024-11-18 23:06:10.750563] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:51.631 [2024-11-18 23:06:10.750621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:51.631 pt1 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.631 23:06:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.631 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.631 "name": "raid_bdev1", 00:10:51.631 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:51.631 "strip_size_kb": 0, 00:10:51.631 "state": "configuring", 00:10:51.631 "raid_level": "raid1", 00:10:51.631 "superblock": true, 00:10:51.631 "num_base_bdevs": 4, 00:10:51.631 "num_base_bdevs_discovered": 1, 00:10:51.631 "num_base_bdevs_operational": 4, 00:10:51.631 "base_bdevs_list": [ 00:10:51.631 { 00:10:51.631 "name": "pt1", 00:10:51.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.631 "is_configured": true, 00:10:51.631 "data_offset": 2048, 00:10:51.631 "data_size": 63488 00:10:51.631 }, 00:10:51.631 { 00:10:51.631 "name": null, 00:10:51.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.632 "is_configured": false, 00:10:51.632 "data_offset": 2048, 00:10:51.632 "data_size": 63488 00:10:51.632 }, 00:10:51.632 { 00:10:51.632 "name": null, 00:10:51.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.632 
"is_configured": false, 00:10:51.632 "data_offset": 2048, 00:10:51.632 "data_size": 63488 00:10:51.632 }, 00:10:51.632 { 00:10:51.632 "name": null, 00:10:51.632 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.632 "is_configured": false, 00:10:51.632 "data_offset": 2048, 00:10:51.632 "data_size": 63488 00:10:51.632 } 00:10:51.632 ] 00:10:51.632 }' 00:10:51.632 23:06:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.632 23:06:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.892 [2024-11-18 23:06:11.179403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.892 [2024-11-18 23:06:11.179490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.892 [2024-11-18 23:06:11.179526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:51.892 [2024-11-18 23:06:11.179552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.892 [2024-11-18 23:06:11.179952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.892 [2024-11-18 23:06:11.180007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.892 [2024-11-18 23:06:11.180107] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.892 [2024-11-18 23:06:11.180163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:51.892 pt2 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.892 [2024-11-18 23:06:11.191401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.892 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.892 "name": "raid_bdev1", 00:10:51.892 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:51.892 "strip_size_kb": 0, 00:10:51.892 "state": "configuring", 00:10:51.892 "raid_level": "raid1", 00:10:51.892 "superblock": true, 00:10:51.892 "num_base_bdevs": 4, 00:10:51.892 "num_base_bdevs_discovered": 1, 00:10:51.892 "num_base_bdevs_operational": 4, 00:10:51.892 "base_bdevs_list": [ 00:10:51.892 { 00:10:51.892 "name": "pt1", 00:10:51.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.892 "is_configured": true, 00:10:51.892 "data_offset": 2048, 00:10:51.893 "data_size": 63488 00:10:51.893 }, 00:10:51.893 { 00:10:51.893 "name": null, 00:10:51.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.893 "is_configured": false, 00:10:51.893 "data_offset": 0, 00:10:51.893 "data_size": 63488 00:10:51.893 }, 00:10:51.893 { 00:10:51.893 "name": null, 00:10:51.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.893 "is_configured": false, 00:10:51.893 "data_offset": 2048, 00:10:51.893 "data_size": 63488 00:10:51.893 }, 00:10:51.893 { 00:10:51.893 "name": null, 00:10:51.893 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.893 "is_configured": false, 00:10:51.893 "data_offset": 2048, 00:10:51.893 "data_size": 63488 00:10:51.893 } 00:10:51.893 ] 00:10:51.893 }' 00:10:51.893 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.893 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.462 [2024-11-18 23:06:11.654636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.462 [2024-11-18 23:06:11.654745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.462 [2024-11-18 23:06:11.654779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:52.462 [2024-11-18 23:06:11.654808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.462 [2024-11-18 23:06:11.655191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.462 [2024-11-18 23:06:11.655277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.462 [2024-11-18 23:06:11.655392] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.462 [2024-11-18 23:06:11.655448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.462 pt2 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:52.462 23:06:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.462 [2024-11-18 23:06:11.666571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:52.462 [2024-11-18 23:06:11.666662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.462 [2024-11-18 23:06:11.666705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:52.462 [2024-11-18 23:06:11.666736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.462 [2024-11-18 23:06:11.667076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.462 [2024-11-18 23:06:11.667134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:52.462 [2024-11-18 23:06:11.667197] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:52.462 [2024-11-18 23:06:11.667226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:52.462 pt3 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.462 [2024-11-18 23:06:11.678561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:52.462 [2024-11-18 
23:06:11.678643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.462 [2024-11-18 23:06:11.678682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:52.462 [2024-11-18 23:06:11.678713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.462 [2024-11-18 23:06:11.679036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.462 [2024-11-18 23:06:11.679094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:52.462 [2024-11-18 23:06:11.679170] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:52.462 [2024-11-18 23:06:11.679225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:52.462 [2024-11-18 23:06:11.679385] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:52.462 [2024-11-18 23:06:11.679433] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.462 [2024-11-18 23:06:11.679728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:52.462 [2024-11-18 23:06:11.679903] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:52.462 [2024-11-18 23:06:11.679948] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:52.462 [2024-11-18 23:06:11.680100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.462 pt4 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.462 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.463 "name": "raid_bdev1", 00:10:52.463 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:52.463 "strip_size_kb": 0, 00:10:52.463 "state": "online", 00:10:52.463 "raid_level": "raid1", 00:10:52.463 "superblock": true, 00:10:52.463 "num_base_bdevs": 4, 00:10:52.463 
"num_base_bdevs_discovered": 4, 00:10:52.463 "num_base_bdevs_operational": 4, 00:10:52.463 "base_bdevs_list": [ 00:10:52.463 { 00:10:52.463 "name": "pt1", 00:10:52.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.463 "is_configured": true, 00:10:52.463 "data_offset": 2048, 00:10:52.463 "data_size": 63488 00:10:52.463 }, 00:10:52.463 { 00:10:52.463 "name": "pt2", 00:10:52.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.463 "is_configured": true, 00:10:52.463 "data_offset": 2048, 00:10:52.463 "data_size": 63488 00:10:52.463 }, 00:10:52.463 { 00:10:52.463 "name": "pt3", 00:10:52.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.463 "is_configured": true, 00:10:52.463 "data_offset": 2048, 00:10:52.463 "data_size": 63488 00:10:52.463 }, 00:10:52.463 { 00:10:52.463 "name": "pt4", 00:10:52.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.463 "is_configured": true, 00:10:52.463 "data_offset": 2048, 00:10:52.463 "data_size": 63488 00:10:52.463 } 00:10:52.463 ] 00:10:52.463 }' 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.463 23:06:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.032 [2024-11-18 23:06:12.162034] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.032 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.032 "name": "raid_bdev1", 00:10:53.032 "aliases": [ 00:10:53.032 "c43c841b-efe9-41ab-9ea8-19c440662836" 00:10:53.032 ], 00:10:53.032 "product_name": "Raid Volume", 00:10:53.032 "block_size": 512, 00:10:53.032 "num_blocks": 63488, 00:10:53.032 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:53.032 "assigned_rate_limits": { 00:10:53.032 "rw_ios_per_sec": 0, 00:10:53.032 "rw_mbytes_per_sec": 0, 00:10:53.032 "r_mbytes_per_sec": 0, 00:10:53.032 "w_mbytes_per_sec": 0 00:10:53.032 }, 00:10:53.032 "claimed": false, 00:10:53.032 "zoned": false, 00:10:53.032 "supported_io_types": { 00:10:53.032 "read": true, 00:10:53.032 "write": true, 00:10:53.032 "unmap": false, 00:10:53.032 "flush": false, 00:10:53.032 "reset": true, 00:10:53.032 "nvme_admin": false, 00:10:53.032 "nvme_io": false, 00:10:53.032 "nvme_io_md": false, 00:10:53.032 "write_zeroes": true, 00:10:53.032 "zcopy": false, 00:10:53.032 "get_zone_info": false, 00:10:53.032 "zone_management": false, 00:10:53.033 "zone_append": false, 00:10:53.033 "compare": false, 00:10:53.033 "compare_and_write": false, 00:10:53.033 "abort": false, 00:10:53.033 "seek_hole": false, 00:10:53.033 "seek_data": false, 00:10:53.033 "copy": false, 00:10:53.033 "nvme_iov_md": false 00:10:53.033 }, 00:10:53.033 "memory_domains": [ 00:10:53.033 { 00:10:53.033 "dma_device_id": "system", 00:10:53.033 
"dma_device_type": 1 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.033 "dma_device_type": 2 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "system", 00:10:53.033 "dma_device_type": 1 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.033 "dma_device_type": 2 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "system", 00:10:53.033 "dma_device_type": 1 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.033 "dma_device_type": 2 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "system", 00:10:53.033 "dma_device_type": 1 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.033 "dma_device_type": 2 00:10:53.033 } 00:10:53.033 ], 00:10:53.033 "driver_specific": { 00:10:53.033 "raid": { 00:10:53.033 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:53.033 "strip_size_kb": 0, 00:10:53.033 "state": "online", 00:10:53.033 "raid_level": "raid1", 00:10:53.033 "superblock": true, 00:10:53.033 "num_base_bdevs": 4, 00:10:53.033 "num_base_bdevs_discovered": 4, 00:10:53.033 "num_base_bdevs_operational": 4, 00:10:53.033 "base_bdevs_list": [ 00:10:53.033 { 00:10:53.033 "name": "pt1", 00:10:53.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.033 "is_configured": true, 00:10:53.033 "data_offset": 2048, 00:10:53.033 "data_size": 63488 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "name": "pt2", 00:10:53.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.033 "is_configured": true, 00:10:53.033 "data_offset": 2048, 00:10:53.033 "data_size": 63488 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "name": "pt3", 00:10:53.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.033 "is_configured": true, 00:10:53.033 "data_offset": 2048, 00:10:53.033 "data_size": 63488 00:10:53.033 }, 00:10:53.033 { 00:10:53.033 "name": "pt4", 00:10:53.033 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:53.033 "is_configured": true, 00:10:53.033 "data_offset": 2048, 00:10:53.033 "data_size": 63488 00:10:53.033 } 00:10:53.033 ] 00:10:53.033 } 00:10:53.033 } 00:10:53.033 }' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:53.033 pt2 00:10:53.033 pt3 00:10:53.033 pt4' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:53.033 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.293 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.293 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.293 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.294 [2024-11-18 23:06:12.449523] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c43c841b-efe9-41ab-9ea8-19c440662836 '!=' c43c841b-efe9-41ab-9ea8-19c440662836 ']' 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.294 [2024-11-18 23:06:12.497189] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:53.294 23:06:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.294 "name": "raid_bdev1", 00:10:53.294 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:53.294 "strip_size_kb": 0, 00:10:53.294 "state": "online", 
00:10:53.294 "raid_level": "raid1", 00:10:53.294 "superblock": true, 00:10:53.294 "num_base_bdevs": 4, 00:10:53.294 "num_base_bdevs_discovered": 3, 00:10:53.294 "num_base_bdevs_operational": 3, 00:10:53.294 "base_bdevs_list": [ 00:10:53.294 { 00:10:53.294 "name": null, 00:10:53.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.294 "is_configured": false, 00:10:53.294 "data_offset": 0, 00:10:53.294 "data_size": 63488 00:10:53.294 }, 00:10:53.294 { 00:10:53.294 "name": "pt2", 00:10:53.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.294 "is_configured": true, 00:10:53.294 "data_offset": 2048, 00:10:53.294 "data_size": 63488 00:10:53.294 }, 00:10:53.294 { 00:10:53.294 "name": "pt3", 00:10:53.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.294 "is_configured": true, 00:10:53.294 "data_offset": 2048, 00:10:53.294 "data_size": 63488 00:10:53.294 }, 00:10:53.294 { 00:10:53.294 "name": "pt4", 00:10:53.294 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.294 "is_configured": true, 00:10:53.294 "data_offset": 2048, 00:10:53.294 "data_size": 63488 00:10:53.294 } 00:10:53.294 ] 00:10:53.294 }' 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.294 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.554 [2024-11-18 23:06:12.908438] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.554 [2024-11-18 23:06:12.908504] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.554 [2024-11-18 23:06:12.908597] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:53.554 [2024-11-18 23:06:12.908679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.554 [2024-11-18 23:06:12.908731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:53.554 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:53.815 
23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.815 23:06:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.815 [2024-11-18 23:06:13.008307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.815 [2024-11-18 23:06:13.008398] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.815 [2024-11-18 23:06:13.008447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:53.815 [2024-11-18 23:06:13.008495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.815 [2024-11-18 23:06:13.010591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.815 [2024-11-18 23:06:13.010664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.815 [2024-11-18 23:06:13.010749] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.815 [2024-11-18 23:06:13.010825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.815 pt2 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.815 "name": "raid_bdev1", 00:10:53.815 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:53.815 "strip_size_kb": 0, 00:10:53.815 "state": "configuring", 00:10:53.815 "raid_level": "raid1", 00:10:53.815 "superblock": true, 00:10:53.815 "num_base_bdevs": 4, 00:10:53.815 "num_base_bdevs_discovered": 1, 00:10:53.815 "num_base_bdevs_operational": 3, 00:10:53.815 "base_bdevs_list": [ 00:10:53.815 { 00:10:53.815 "name": null, 00:10:53.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.815 "is_configured": false, 00:10:53.815 "data_offset": 2048, 00:10:53.815 "data_size": 63488 00:10:53.815 }, 00:10:53.815 { 00:10:53.815 "name": "pt2", 00:10:53.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.815 "is_configured": true, 00:10:53.815 "data_offset": 2048, 00:10:53.815 "data_size": 63488 00:10:53.815 }, 00:10:53.815 { 00:10:53.815 "name": null, 00:10:53.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.815 "is_configured": false, 00:10:53.815 "data_offset": 2048, 00:10:53.815 "data_size": 63488 00:10:53.815 }, 00:10:53.815 { 00:10:53.815 "name": null, 00:10:53.815 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.815 "is_configured": false, 00:10:53.815 "data_offset": 2048, 00:10:53.815 "data_size": 63488 00:10:53.815 } 00:10:53.815 ] 00:10:53.815 }' 
00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.815 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.385 [2024-11-18 23:06:13.487493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:54.385 [2024-11-18 23:06:13.487583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.385 [2024-11-18 23:06:13.487618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:54.385 [2024-11-18 23:06:13.487649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.385 [2024-11-18 23:06:13.488053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.385 [2024-11-18 23:06:13.488113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:54.385 [2024-11-18 23:06:13.488205] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:54.385 [2024-11-18 23:06:13.488254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:54.385 pt3 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.385 "name": "raid_bdev1", 00:10:54.385 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:54.385 "strip_size_kb": 0, 00:10:54.385 "state": "configuring", 00:10:54.385 "raid_level": "raid1", 00:10:54.385 "superblock": true, 00:10:54.385 "num_base_bdevs": 4, 00:10:54.385 "num_base_bdevs_discovered": 2, 00:10:54.385 "num_base_bdevs_operational": 3, 00:10:54.385 
"base_bdevs_list": [ 00:10:54.385 { 00:10:54.385 "name": null, 00:10:54.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.385 "is_configured": false, 00:10:54.385 "data_offset": 2048, 00:10:54.385 "data_size": 63488 00:10:54.385 }, 00:10:54.385 { 00:10:54.385 "name": "pt2", 00:10:54.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.385 "is_configured": true, 00:10:54.385 "data_offset": 2048, 00:10:54.385 "data_size": 63488 00:10:54.385 }, 00:10:54.385 { 00:10:54.385 "name": "pt3", 00:10:54.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.385 "is_configured": true, 00:10:54.385 "data_offset": 2048, 00:10:54.385 "data_size": 63488 00:10:54.385 }, 00:10:54.385 { 00:10:54.385 "name": null, 00:10:54.385 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:54.385 "is_configured": false, 00:10:54.385 "data_offset": 2048, 00:10:54.385 "data_size": 63488 00:10:54.385 } 00:10:54.385 ] 00:10:54.385 }' 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.385 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.645 [2024-11-18 23:06:13.926776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:54.645 [2024-11-18 23:06:13.926884] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.645 [2024-11-18 23:06:13.926923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:54.645 [2024-11-18 23:06:13.926951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.645 [2024-11-18 23:06:13.927376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.645 [2024-11-18 23:06:13.927436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:54.645 [2024-11-18 23:06:13.927544] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:54.645 [2024-11-18 23:06:13.927608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:54.645 [2024-11-18 23:06:13.927748] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:54.645 [2024-11-18 23:06:13.927790] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.645 [2024-11-18 23:06:13.928044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:54.645 [2024-11-18 23:06:13.928210] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:54.645 [2024-11-18 23:06:13.928250] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:54.645 [2024-11-18 23:06:13.928423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.645 pt4 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.645 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.646 "name": "raid_bdev1", 00:10:54.646 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:54.646 "strip_size_kb": 0, 00:10:54.646 "state": "online", 00:10:54.646 "raid_level": "raid1", 00:10:54.646 "superblock": true, 00:10:54.646 "num_base_bdevs": 4, 00:10:54.646 "num_base_bdevs_discovered": 3, 00:10:54.646 "num_base_bdevs_operational": 3, 00:10:54.646 "base_bdevs_list": [ 00:10:54.646 { 00:10:54.646 "name": null, 00:10:54.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.646 "is_configured": false, 00:10:54.646 
"data_offset": 2048, 00:10:54.646 "data_size": 63488 00:10:54.646 }, 00:10:54.646 { 00:10:54.646 "name": "pt2", 00:10:54.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.646 "is_configured": true, 00:10:54.646 "data_offset": 2048, 00:10:54.646 "data_size": 63488 00:10:54.646 }, 00:10:54.646 { 00:10:54.646 "name": "pt3", 00:10:54.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.646 "is_configured": true, 00:10:54.646 "data_offset": 2048, 00:10:54.646 "data_size": 63488 00:10:54.646 }, 00:10:54.646 { 00:10:54.646 "name": "pt4", 00:10:54.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:54.646 "is_configured": true, 00:10:54.646 "data_offset": 2048, 00:10:54.646 "data_size": 63488 00:10:54.646 } 00:10:54.646 ] 00:10:54.646 }' 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.646 23:06:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.219 [2024-11-18 23:06:14.306108] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.219 [2024-11-18 23:06:14.306176] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.219 [2024-11-18 23:06:14.306271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.219 [2024-11-18 23:06:14.306379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.219 [2024-11-18 23:06:14.306415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:55.219 23:06:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.219 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.219 [2024-11-18 23:06:14.377997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:55.219 [2024-11-18 23:06:14.378081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:55.219 [2024-11-18 23:06:14.378136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:55.219 [2024-11-18 23:06:14.378162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.219 [2024-11-18 23:06:14.380281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.219 [2024-11-18 23:06:14.380360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:55.219 [2024-11-18 23:06:14.380446] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:55.220 [2024-11-18 23:06:14.380512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:55.220 [2024-11-18 23:06:14.380643] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:55.220 [2024-11-18 23:06:14.380715] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.220 [2024-11-18 23:06:14.380768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:55.220 [2024-11-18 23:06:14.380842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.220 [2024-11-18 23:06:14.380965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:55.220 pt1 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.220 "name": "raid_bdev1", 00:10:55.220 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:55.220 "strip_size_kb": 0, 00:10:55.220 "state": "configuring", 00:10:55.220 "raid_level": "raid1", 00:10:55.220 "superblock": true, 00:10:55.220 "num_base_bdevs": 4, 00:10:55.220 "num_base_bdevs_discovered": 2, 00:10:55.220 "num_base_bdevs_operational": 3, 00:10:55.220 "base_bdevs_list": [ 00:10:55.220 { 00:10:55.220 "name": null, 00:10:55.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.220 "is_configured": false, 00:10:55.220 "data_offset": 2048, 00:10:55.220 
"data_size": 63488 00:10:55.220 }, 00:10:55.220 { 00:10:55.220 "name": "pt2", 00:10:55.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.220 "is_configured": true, 00:10:55.220 "data_offset": 2048, 00:10:55.220 "data_size": 63488 00:10:55.220 }, 00:10:55.220 { 00:10:55.220 "name": "pt3", 00:10:55.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.220 "is_configured": true, 00:10:55.220 "data_offset": 2048, 00:10:55.220 "data_size": 63488 00:10:55.220 }, 00:10:55.220 { 00:10:55.220 "name": null, 00:10:55.220 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:55.220 "is_configured": false, 00:10:55.220 "data_offset": 2048, 00:10:55.220 "data_size": 63488 00:10:55.220 } 00:10:55.220 ] 00:10:55.220 }' 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.220 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:55.485 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:55.485 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.485 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.749 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.749 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:55.749 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:55.749 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.749 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.749 [2024-11-18 
23:06:14.909082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:55.749 [2024-11-18 23:06:14.909189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.750 [2024-11-18 23:06:14.909234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:55.750 [2024-11-18 23:06:14.909267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.750 [2024-11-18 23:06:14.909750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.750 [2024-11-18 23:06:14.909780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:55.750 [2024-11-18 23:06:14.909857] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:55.750 [2024-11-18 23:06:14.909883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:55.750 [2024-11-18 23:06:14.909990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:55.750 [2024-11-18 23:06:14.910005] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.750 [2024-11-18 23:06:14.910258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:55.750 [2024-11-18 23:06:14.910410] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:55.750 [2024-11-18 23:06:14.910425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:55.750 [2024-11-18 23:06:14.910548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.750 pt4 00:10:55.750 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.750 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:55.750 23:06:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.750 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.750 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.750 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.751 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.751 "name": "raid_bdev1", 00:10:55.751 "uuid": "c43c841b-efe9-41ab-9ea8-19c440662836", 00:10:55.751 "strip_size_kb": 0, 00:10:55.751 "state": "online", 00:10:55.751 "raid_level": "raid1", 00:10:55.751 "superblock": true, 00:10:55.751 "num_base_bdevs": 4, 00:10:55.751 "num_base_bdevs_discovered": 3, 00:10:55.751 "num_base_bdevs_operational": 3, 00:10:55.751 "base_bdevs_list": [ 00:10:55.751 { 
00:10:55.751 "name": null, 00:10:55.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.752 "is_configured": false, 00:10:55.752 "data_offset": 2048, 00:10:55.752 "data_size": 63488 00:10:55.752 }, 00:10:55.752 { 00:10:55.752 "name": "pt2", 00:10:55.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.752 "is_configured": true, 00:10:55.752 "data_offset": 2048, 00:10:55.752 "data_size": 63488 00:10:55.752 }, 00:10:55.752 { 00:10:55.752 "name": "pt3", 00:10:55.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.752 "is_configured": true, 00:10:55.752 "data_offset": 2048, 00:10:55.752 "data_size": 63488 00:10:55.752 }, 00:10:55.752 { 00:10:55.752 "name": "pt4", 00:10:55.752 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:55.752 "is_configured": true, 00:10:55.752 "data_offset": 2048, 00:10:55.752 "data_size": 63488 00:10:55.752 } 00:10:55.752 ] 00:10:55.752 }' 00:10:55.752 23:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.752 23:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.016 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:56.016 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.016 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.016 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:56.276 
23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.276 [2024-11-18 23:06:15.436486] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c43c841b-efe9-41ab-9ea8-19c440662836 '!=' c43c841b-efe9-41ab-9ea8-19c440662836 ']' 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85233 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85233 ']' 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85233 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85233 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85233' 00:10:56.276 killing process with pid 85233 00:10:56.276 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85233 00:10:56.276 [2024-11-18 23:06:15.511156] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.276 [2024-11-18 23:06:15.511239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.276 [2024-11-18 23:06:15.511324] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.277 [2024-11-18 23:06:15.511335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:56.277 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85233 00:10:56.277 [2024-11-18 23:06:15.553007] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.541 ************************************ 00:10:56.541 END TEST raid_superblock_test 00:10:56.541 ************************************ 00:10:56.541 23:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:56.541 00:10:56.541 real 0m7.083s 00:10:56.541 user 0m11.909s 00:10:56.541 sys 0m1.476s 00:10:56.541 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.541 23:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.541 23:06:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:56.541 23:06:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:56.541 23:06:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.541 23:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.541 ************************************ 00:10:56.541 START TEST raid_read_error_test 00:10:56.541 ************************************ 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:56.541 23:06:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1Y94i4h1xm 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85704 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85704 00:10:56.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85704 ']' 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.541 23:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.801 [2024-11-18 23:06:15.965422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:56.801 [2024-11-18 23:06:15.965622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85704 ] 00:10:56.801 [2024-11-18 23:06:16.124209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.801 [2024-11-18 23:06:16.168576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.061 [2024-11-18 23:06:16.210815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.061 [2024-11-18 23:06:16.210849] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.631 BaseBdev1_malloc 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.631 true 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.631 [2024-11-18 23:06:16.820639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:57.631 [2024-11-18 23:06:16.820730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.631 [2024-11-18 23:06:16.820753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:57.631 [2024-11-18 23:06:16.820762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.631 [2024-11-18 23:06:16.822885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.631 [2024-11-18 23:06:16.822926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:57.631 BaseBdev1 00:10:57.631 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 BaseBdev2_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 true 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 [2024-11-18 23:06:16.878558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.632 [2024-11-18 23:06:16.878625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.632 [2024-11-18 23:06:16.878653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.632 [2024-11-18 23:06:16.878667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.632 [2024-11-18 23:06:16.881733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.632 [2024-11-18 23:06:16.881781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.632 BaseBdev2 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 BaseBdev3_malloc 00:10:57.632 23:06:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 true 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 [2024-11-18 23:06:16.919208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:57.632 [2024-11-18 23:06:16.919273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.632 [2024-11-18 23:06:16.919316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.632 [2024-11-18 23:06:16.919325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.632 [2024-11-18 23:06:16.921288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.632 [2024-11-18 23:06:16.921330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:57.632 BaseBdev3 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 BaseBdev4_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 true 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 [2024-11-18 23:06:16.959637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:57.632 [2024-11-18 23:06:16.959682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.632 [2024-11-18 23:06:16.959718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:57.632 [2024-11-18 23:06:16.959726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.632 [2024-11-18 23:06:16.961703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.632 [2024-11-18 23:06:16.961737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:57.632 BaseBdev4 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 [2024-11-18 23:06:16.971660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.632 [2024-11-18 23:06:16.973484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.632 [2024-11-18 23:06:16.973564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.632 [2024-11-18 23:06:16.973613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.632 [2024-11-18 23:06:16.973788] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:57.632 [2024-11-18 23:06:16.973800] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:57.632 [2024-11-18 23:06:16.974038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:57.632 [2024-11-18 23:06:16.974154] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:57.632 [2024-11-18 23:06:16.974164] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:57.632 [2024-11-18 23:06:16.974297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:57.632 23:06:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.632 23:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 23:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.891 23:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.891 "name": "raid_bdev1", 00:10:57.891 "uuid": "87ad62ff-7796-495f-b081-2c1c57122524", 00:10:57.891 "strip_size_kb": 0, 00:10:57.891 "state": "online", 00:10:57.891 "raid_level": "raid1", 00:10:57.891 "superblock": true, 00:10:57.891 "num_base_bdevs": 4, 00:10:57.891 "num_base_bdevs_discovered": 4, 00:10:57.891 "num_base_bdevs_operational": 4, 00:10:57.891 "base_bdevs_list": [ 00:10:57.891 { 
00:10:57.891 "name": "BaseBdev1", 00:10:57.891 "uuid": "6feb39d2-1e1d-5d61-9efe-0c7fd7bba7e9", 00:10:57.891 "is_configured": true, 00:10:57.891 "data_offset": 2048, 00:10:57.891 "data_size": 63488 00:10:57.891 }, 00:10:57.891 { 00:10:57.891 "name": "BaseBdev2", 00:10:57.891 "uuid": "064d8737-c79f-5ea0-9a22-7d9aa1d8b96d", 00:10:57.891 "is_configured": true, 00:10:57.891 "data_offset": 2048, 00:10:57.891 "data_size": 63488 00:10:57.891 }, 00:10:57.891 { 00:10:57.891 "name": "BaseBdev3", 00:10:57.891 "uuid": "9e4c25d4-ace7-555f-8fa5-b14cdbac6638", 00:10:57.891 "is_configured": true, 00:10:57.892 "data_offset": 2048, 00:10:57.892 "data_size": 63488 00:10:57.892 }, 00:10:57.892 { 00:10:57.892 "name": "BaseBdev4", 00:10:57.892 "uuid": "d3abbd91-edc2-5c54-a514-ad13b2f08690", 00:10:57.892 "is_configured": true, 00:10:57.892 "data_offset": 2048, 00:10:57.892 "data_size": 63488 00:10:57.892 } 00:10:57.892 ] 00:10:57.892 }' 00:10:57.892 23:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.892 23:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.151 23:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.151 23:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.151 [2024-11-18 23:06:17.527076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.098 23:06:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.098 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.358 23:06:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.358 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.358 "name": "raid_bdev1", 00:10:59.358 "uuid": "87ad62ff-7796-495f-b081-2c1c57122524", 00:10:59.358 "strip_size_kb": 0, 00:10:59.358 "state": "online", 00:10:59.358 "raid_level": "raid1", 00:10:59.358 "superblock": true, 00:10:59.358 "num_base_bdevs": 4, 00:10:59.358 "num_base_bdevs_discovered": 4, 00:10:59.358 "num_base_bdevs_operational": 4, 00:10:59.358 "base_bdevs_list": [ 00:10:59.358 { 00:10:59.358 "name": "BaseBdev1", 00:10:59.358 "uuid": "6feb39d2-1e1d-5d61-9efe-0c7fd7bba7e9", 00:10:59.358 "is_configured": true, 00:10:59.358 "data_offset": 2048, 00:10:59.358 "data_size": 63488 00:10:59.358 }, 00:10:59.358 { 00:10:59.358 "name": "BaseBdev2", 00:10:59.358 "uuid": "064d8737-c79f-5ea0-9a22-7d9aa1d8b96d", 00:10:59.358 "is_configured": true, 00:10:59.358 "data_offset": 2048, 00:10:59.358 "data_size": 63488 00:10:59.358 }, 00:10:59.358 { 00:10:59.358 "name": "BaseBdev3", 00:10:59.358 "uuid": "9e4c25d4-ace7-555f-8fa5-b14cdbac6638", 00:10:59.358 "is_configured": true, 00:10:59.358 "data_offset": 2048, 00:10:59.358 "data_size": 63488 00:10:59.358 }, 00:10:59.358 { 00:10:59.358 "name": "BaseBdev4", 00:10:59.358 "uuid": "d3abbd91-edc2-5c54-a514-ad13b2f08690", 00:10:59.358 "is_configured": true, 00:10:59.358 "data_offset": 2048, 00:10:59.358 "data_size": 63488 00:10:59.358 } 00:10:59.358 ] 00:10:59.358 }' 00:10:59.358 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.358 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.618 [2024-11-18 23:06:18.911266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.618 [2024-11-18 23:06:18.911330] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.618 [2024-11-18 23:06:18.913916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.618 [2024-11-18 23:06:18.913990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.618 [2024-11-18 23:06:18.914107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.618 [2024-11-18 23:06:18.914117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:59.618 { 00:10:59.618 "results": [ 00:10:59.618 { 00:10:59.618 "job": "raid_bdev1", 00:10:59.618 "core_mask": "0x1", 00:10:59.618 "workload": "randrw", 00:10:59.618 "percentage": 50, 00:10:59.618 "status": "finished", 00:10:59.618 "queue_depth": 1, 00:10:59.618 "io_size": 131072, 00:10:59.618 "runtime": 1.385056, 00:10:59.618 "iops": 11909.265762539566, 00:10:59.618 "mibps": 1488.6582203174457, 00:10:59.618 "io_failed": 0, 00:10:59.618 "io_timeout": 0, 00:10:59.618 "avg_latency_us": 81.51258687626658, 00:10:59.618 "min_latency_us": 21.799126637554586, 00:10:59.618 "max_latency_us": 1488.1537117903931 00:10:59.618 } 00:10:59.618 ], 00:10:59.618 "core_count": 1 00:10:59.618 } 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85704 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85704 ']' 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85704 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85704 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.618 killing process with pid 85704 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85704' 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85704 00:10:59.618 [2024-11-18 23:06:18.960516] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.618 23:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85704 00:10:59.879 [2024-11-18 23:06:18.994463] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1Y94i4h1xm 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:59.879 ************************************ 00:10:59.879 END TEST raid_read_error_test 00:10:59.879 ************************************ 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:59.879 00:10:59.879 real 0m3.372s 00:10:59.879 user 0m4.243s 00:10:59.879 sys 0m0.554s 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.879 23:06:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 23:06:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:00.139 23:06:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:00.139 23:06:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.139 23:06:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.139 ************************************ 00:11:00.139 START TEST raid_write_error_test 00:11:00.139 ************************************ 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V1bwP0NNxw 00:11:00.139 23:06:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85838 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85838 00:11:00.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85838 ']' 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.139 23:06:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.139 [2024-11-18 23:06:19.408894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:00.139 [2024-11-18 23:06:19.409014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85838 ] 00:11:00.399 [2024-11-18 23:06:19.568227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.399 [2024-11-18 23:06:19.612957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.399 [2024-11-18 23:06:19.655015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.399 [2024-11-18 23:06:19.655050] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.970 BaseBdev1_malloc 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.970 true 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.970 [2024-11-18 23:06:20.244921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:00.970 [2024-11-18 23:06:20.244995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.970 [2024-11-18 23:06:20.245023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:00.970 [2024-11-18 23:06:20.245033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.970 [2024-11-18 23:06:20.247111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.970 [2024-11-18 23:06:20.247188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.970 BaseBdev1 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.970 BaseBdev2_malloc 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.970 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:00.970 23:06:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.971 true 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.971 [2024-11-18 23:06:20.298262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.971 [2024-11-18 23:06:20.298344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.971 [2024-11-18 23:06:20.298374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:00.971 [2024-11-18 23:06:20.298388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.971 [2024-11-18 23:06:20.301511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.971 [2024-11-18 23:06:20.301560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.971 BaseBdev2 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:00.971 BaseBdev3_malloc 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.971 true 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.971 [2024-11-18 23:06:20.339354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:00.971 [2024-11-18 23:06:20.339440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.971 [2024-11-18 23:06:20.339463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:00.971 [2024-11-18 23:06:20.339473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.971 [2024-11-18 23:06:20.341475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.971 [2024-11-18 23:06:20.341509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:00.971 BaseBdev3 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.235 BaseBdev4_malloc 00:11:01.235 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.236 true 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.236 [2024-11-18 23:06:20.380031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:01.236 [2024-11-18 23:06:20.380077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.236 [2024-11-18 23:06:20.380100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:01.236 [2024-11-18 23:06:20.380108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.236 [2024-11-18 23:06:20.382162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.236 [2024-11-18 23:06:20.382198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:01.236 BaseBdev4 
00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.236 [2024-11-18 23:06:20.392052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.236 [2024-11-18 23:06:20.393860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.236 [2024-11-18 23:06:20.393941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.236 [2024-11-18 23:06:20.393990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.236 [2024-11-18 23:06:20.394172] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:01.236 [2024-11-18 23:06:20.394187] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.236 [2024-11-18 23:06:20.394478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:01.236 [2024-11-18 23:06:20.394621] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:01.236 [2024-11-18 23:06:20.394639] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:01.236 [2024-11-18 23:06:20.394753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.236 "name": "raid_bdev1", 00:11:01.236 "uuid": "fddac61d-429e-4d07-ad30-fb688999717d", 00:11:01.236 "strip_size_kb": 0, 00:11:01.236 "state": "online", 00:11:01.236 "raid_level": "raid1", 00:11:01.236 "superblock": true, 00:11:01.236 "num_base_bdevs": 4, 00:11:01.236 "num_base_bdevs_discovered": 4, 00:11:01.236 
"num_base_bdevs_operational": 4, 00:11:01.236 "base_bdevs_list": [ 00:11:01.236 { 00:11:01.236 "name": "BaseBdev1", 00:11:01.236 "uuid": "50886185-0de6-5d0a-86be-7cb23573e8e5", 00:11:01.236 "is_configured": true, 00:11:01.236 "data_offset": 2048, 00:11:01.236 "data_size": 63488 00:11:01.236 }, 00:11:01.236 { 00:11:01.236 "name": "BaseBdev2", 00:11:01.236 "uuid": "863fa820-cbe6-539b-b100-d7556aac27ff", 00:11:01.236 "is_configured": true, 00:11:01.236 "data_offset": 2048, 00:11:01.236 "data_size": 63488 00:11:01.236 }, 00:11:01.236 { 00:11:01.236 "name": "BaseBdev3", 00:11:01.236 "uuid": "de3d00e9-fe98-567d-9f0c-1de20aedb4a4", 00:11:01.236 "is_configured": true, 00:11:01.236 "data_offset": 2048, 00:11:01.236 "data_size": 63488 00:11:01.236 }, 00:11:01.236 { 00:11:01.236 "name": "BaseBdev4", 00:11:01.236 "uuid": "051f920a-835a-54db-98a1-16e1a002b2bd", 00:11:01.236 "is_configured": true, 00:11:01.236 "data_offset": 2048, 00:11:01.236 "data_size": 63488 00:11:01.236 } 00:11:01.236 ] 00:11:01.236 }' 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.236 23:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.498 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:01.498 23:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:01.759 [2024-11-18 23:06:20.943485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.699 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:02.699 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.699 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.700 [2024-11-18 23:06:21.861638] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:02.700 [2024-11-18 23:06:21.861782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.700 [2024-11-18 23:06:21.862037] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.700 "name": "raid_bdev1", 00:11:02.700 "uuid": "fddac61d-429e-4d07-ad30-fb688999717d", 00:11:02.700 "strip_size_kb": 0, 00:11:02.700 "state": "online", 00:11:02.700 "raid_level": "raid1", 00:11:02.700 "superblock": true, 00:11:02.700 "num_base_bdevs": 4, 00:11:02.700 "num_base_bdevs_discovered": 3, 00:11:02.700 "num_base_bdevs_operational": 3, 00:11:02.700 "base_bdevs_list": [ 00:11:02.700 { 00:11:02.700 "name": null, 00:11:02.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.700 "is_configured": false, 00:11:02.700 "data_offset": 0, 00:11:02.700 "data_size": 63488 00:11:02.700 }, 00:11:02.700 { 00:11:02.700 "name": "BaseBdev2", 00:11:02.700 "uuid": "863fa820-cbe6-539b-b100-d7556aac27ff", 00:11:02.700 "is_configured": true, 00:11:02.700 "data_offset": 2048, 00:11:02.700 "data_size": 63488 00:11:02.700 }, 00:11:02.700 { 00:11:02.700 "name": "BaseBdev3", 00:11:02.700 "uuid": "de3d00e9-fe98-567d-9f0c-1de20aedb4a4", 00:11:02.700 "is_configured": true, 00:11:02.700 "data_offset": 2048, 00:11:02.700 "data_size": 63488 00:11:02.700 }, 00:11:02.700 { 00:11:02.700 "name": "BaseBdev4", 00:11:02.700 "uuid": "051f920a-835a-54db-98a1-16e1a002b2bd", 00:11:02.700 "is_configured": true, 00:11:02.700 "data_offset": 2048, 00:11:02.700 "data_size": 63488 00:11:02.700 } 00:11:02.700 ] 
00:11:02.700 }' 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.700 23:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.959 [2024-11-18 23:06:22.280424] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.959 [2024-11-18 23:06:22.280522] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.959 [2024-11-18 23:06:22.282976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.959 [2024-11-18 23:06:22.283065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.959 [2024-11-18 23:06:22.283179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.959 [2024-11-18 23:06:22.283246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.959 { 00:11:02.959 "results": [ 00:11:02.959 { 00:11:02.959 "job": "raid_bdev1", 00:11:02.959 "core_mask": "0x1", 00:11:02.959 "workload": "randrw", 00:11:02.959 "percentage": 50, 00:11:02.959 "status": "finished", 00:11:02.959 "queue_depth": 1, 00:11:02.959 "io_size": 131072, 00:11:02.959 "runtime": 1.337563, 00:11:02.959 "iops": 12785.19217412563, 00:11:02.959 "mibps": 1598.1490217657038, 00:11:02.959 "io_failed": 0, 00:11:02.959 "io_timeout": 0, 00:11:02.959 "avg_latency_us": 75.74429626807492, 00:11:02.959 "min_latency_us": 21.687336244541484, 
00:11:02.959 "max_latency_us": 1402.2986899563318 00:11:02.959 } 00:11:02.959 ], 00:11:02.959 "core_count": 1 00:11:02.959 } 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85838 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85838 ']' 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85838 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:02.959 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.960 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85838 00:11:02.960 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.960 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.960 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85838' 00:11:02.960 killing process with pid 85838 00:11:02.960 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85838 00:11:02.960 [2024-11-18 23:06:22.323406] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.960 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85838 00:11:03.219 [2024-11-18 23:06:22.358314] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.219 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V1bwP0NNxw 00:11:03.219 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.219 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:03.479 00:11:03.479 real 0m3.297s 00:11:03.479 user 0m4.133s 00:11:03.479 sys 0m0.535s 00:11:03.479 ************************************ 00:11:03.479 END TEST raid_write_error_test 00:11:03.479 ************************************ 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.479 23:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.479 23:06:22 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:03.479 23:06:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:03.479 23:06:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:03.479 23:06:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:03.479 23:06:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.479 23:06:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.479 ************************************ 00:11:03.479 START TEST raid_rebuild_test 00:11:03.479 ************************************ 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:03.479 
23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85965 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85965 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85965 ']' 00:11:03.479 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.480 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.480 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.480 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.480 23:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.480 [2024-11-18 23:06:22.774965] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:03.480 [2024-11-18 23:06:22.775165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:03.480 Zero copy mechanism will not be used. 
00:11:03.480 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85965 ] 00:11:03.743 [2024-11-18 23:06:22.935083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.743 [2024-11-18 23:06:22.979760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.743 [2024-11-18 23:06:23.022019] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.743 [2024-11-18 23:06:23.022130] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.314 BaseBdev1_malloc 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.314 [2024-11-18 23:06:23.620417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:04.314 [2024-11-18 23:06:23.620542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.314 [2024-11-18 
23:06:23.620574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:04.314 [2024-11-18 23:06:23.620590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.314 [2024-11-18 23:06:23.622657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.314 [2024-11-18 23:06:23.622698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.314 BaseBdev1 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.314 BaseBdev2_malloc 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.314 [2024-11-18 23:06:23.657807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:04.314 [2024-11-18 23:06:23.657911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.314 [2024-11-18 23:06:23.657938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:04.314 [2024-11-18 23:06:23.657948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:04.314 [2024-11-18 23:06:23.660297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.314 [2024-11-18 23:06:23.660335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.314 BaseBdev2 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.314 spare_malloc 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.314 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.575 spare_delay 00:11:04.575 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.575 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:04.575 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.575 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.575 [2024-11-18 23:06:23.698114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:04.575 [2024-11-18 23:06:23.698162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.575 [2024-11-18 23:06:23.698198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:04.575 [2024-11-18 23:06:23.698206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.575 [2024-11-18 23:06:23.700307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.575 [2024-11-18 23:06:23.700336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:04.575 spare 00:11:04.575 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.576 [2024-11-18 23:06:23.710133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.576 [2024-11-18 23:06:23.711942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.576 [2024-11-18 23:06:23.712025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:04.576 [2024-11-18 23:06:23.712037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.576 [2024-11-18 23:06:23.712288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:04.576 [2024-11-18 23:06:23.712432] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:04.576 [2024-11-18 23:06:23.712451] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:04.576 [2024-11-18 23:06:23.712573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.576 
23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.576 "name": "raid_bdev1", 00:11:04.576 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:04.576 "strip_size_kb": 0, 00:11:04.576 "state": "online", 00:11:04.576 "raid_level": "raid1", 00:11:04.576 "superblock": false, 00:11:04.576 "num_base_bdevs": 2, 00:11:04.576 "num_base_bdevs_discovered": 
2, 00:11:04.576 "num_base_bdevs_operational": 2, 00:11:04.576 "base_bdevs_list": [ 00:11:04.576 { 00:11:04.576 "name": "BaseBdev1", 00:11:04.576 "uuid": "a0bfd5ce-c2e6-5671-a5b2-a0204289a96c", 00:11:04.576 "is_configured": true, 00:11:04.576 "data_offset": 0, 00:11:04.576 "data_size": 65536 00:11:04.576 }, 00:11:04.576 { 00:11:04.576 "name": "BaseBdev2", 00:11:04.576 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:04.576 "is_configured": true, 00:11:04.576 "data_offset": 0, 00:11:04.576 "data_size": 65536 00:11:04.576 } 00:11:04.576 ] 00:11:04.576 }' 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.576 23:06:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.837 [2024-11-18 23:06:24.169630] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:04.837 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:05.097 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:05.097 [2024-11-18 23:06:24.428958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:05.097 /dev/nbd0 00:11:05.355 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:05.355 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:05.356 1+0 records in 00:11:05.356 1+0 records out 00:11:05.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401231 s, 10.2 MB/s 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:05.356 23:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:09.554 65536+0 records in 00:11:09.554 65536+0 records out 00:11:09.554 33554432 bytes (34 MB, 32 MiB) copied, 3.55909 s, 9.4 MB/s 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:09.554 [2024-11-18 23:06:28.279425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.554 
23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.554 [2024-11-18 23:06:28.291500] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.554 "name": "raid_bdev1", 00:11:09.554 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:09.554 "strip_size_kb": 0, 00:11:09.554 "state": "online", 00:11:09.554 "raid_level": "raid1", 00:11:09.554 "superblock": false, 00:11:09.554 "num_base_bdevs": 2, 00:11:09.554 "num_base_bdevs_discovered": 1, 00:11:09.554 "num_base_bdevs_operational": 1, 00:11:09.554 "base_bdevs_list": [ 00:11:09.554 { 00:11:09.554 "name": null, 00:11:09.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.554 "is_configured": false, 00:11:09.554 "data_offset": 0, 00:11:09.554 "data_size": 65536 00:11:09.554 }, 00:11:09.554 { 00:11:09.554 "name": "BaseBdev2", 00:11:09.554 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:09.554 "is_configured": true, 00:11:09.554 "data_offset": 0, 00:11:09.554 "data_size": 65536 00:11:09.554 } 00:11:09.554 ] 00:11:09.554 }' 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.554 [2024-11-18 23:06:28.734832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:09.554 [2024-11-18 23:06:28.738952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:09.554 23:06:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.554 23:06:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:09.554 [2024-11-18 23:06:28.740842] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.493 "name": "raid_bdev1", 00:11:10.493 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:10.493 "strip_size_kb": 0, 00:11:10.493 "state": "online", 00:11:10.493 "raid_level": "raid1", 00:11:10.493 "superblock": false, 00:11:10.493 "num_base_bdevs": 2, 00:11:10.493 "num_base_bdevs_discovered": 2, 00:11:10.493 "num_base_bdevs_operational": 2, 00:11:10.493 "process": { 00:11:10.493 "type": "rebuild", 00:11:10.493 "target": "spare", 00:11:10.493 "progress": { 00:11:10.493 "blocks": 20480, 00:11:10.493 "percent": 31 00:11:10.493 } 00:11:10.493 }, 00:11:10.493 "base_bdevs_list": [ 00:11:10.493 { 
00:11:10.493 "name": "spare", 00:11:10.493 "uuid": "8397b659-c38e-548b-bba5-1a9e297de061", 00:11:10.493 "is_configured": true, 00:11:10.493 "data_offset": 0, 00:11:10.493 "data_size": 65536 00:11:10.493 }, 00:11:10.493 { 00:11:10.493 "name": "BaseBdev2", 00:11:10.493 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:10.493 "is_configured": true, 00:11:10.493 "data_offset": 0, 00:11:10.493 "data_size": 65536 00:11:10.493 } 00:11:10.493 ] 00:11:10.493 }' 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:10.493 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.753 [2024-11-18 23:06:29.905561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:10.753 [2024-11-18 23:06:29.945487] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:10.753 [2024-11-18 23:06:29.945575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.753 [2024-11-18 23:06:29.945612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:10.753 [2024-11-18 23:06:29.945620] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.753 23:06:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.753 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.754 "name": "raid_bdev1", 00:11:10.754 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:10.754 "strip_size_kb": 0, 00:11:10.754 "state": "online", 00:11:10.754 "raid_level": "raid1", 00:11:10.754 "superblock": false, 00:11:10.754 "num_base_bdevs": 2, 00:11:10.754 "num_base_bdevs_discovered": 1, 
00:11:10.754 "num_base_bdevs_operational": 1, 00:11:10.754 "base_bdevs_list": [ 00:11:10.754 { 00:11:10.754 "name": null, 00:11:10.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.754 "is_configured": false, 00:11:10.754 "data_offset": 0, 00:11:10.754 "data_size": 65536 00:11:10.754 }, 00:11:10.754 { 00:11:10.754 "name": "BaseBdev2", 00:11:10.754 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:10.754 "is_configured": true, 00:11:10.754 "data_offset": 0, 00:11:10.754 "data_size": 65536 00:11:10.754 } 00:11:10.754 ] 00:11:10.754 }' 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.754 23:06:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.014 "name": "raid_bdev1", 00:11:11.014 "uuid": 
"b1710738-1462-4625-9ed3-52d9a1842365", 00:11:11.014 "strip_size_kb": 0, 00:11:11.014 "state": "online", 00:11:11.014 "raid_level": "raid1", 00:11:11.014 "superblock": false, 00:11:11.014 "num_base_bdevs": 2, 00:11:11.014 "num_base_bdevs_discovered": 1, 00:11:11.014 "num_base_bdevs_operational": 1, 00:11:11.014 "base_bdevs_list": [ 00:11:11.014 { 00:11:11.014 "name": null, 00:11:11.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.014 "is_configured": false, 00:11:11.014 "data_offset": 0, 00:11:11.014 "data_size": 65536 00:11:11.014 }, 00:11:11.014 { 00:11:11.014 "name": "BaseBdev2", 00:11:11.014 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:11.014 "is_configured": true, 00:11:11.014 "data_offset": 0, 00:11:11.014 "data_size": 65536 00:11:11.014 } 00:11:11.014 ] 00:11:11.014 }' 00:11:11.014 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.278 [2024-11-18 23:06:30.473114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:11.278 [2024-11-18 23:06:30.477206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.278 23:06:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:11.278 [2024-11-18 23:06:30.479016] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.225 "name": "raid_bdev1", 00:11:12.225 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:12.225 "strip_size_kb": 0, 00:11:12.225 "state": "online", 00:11:12.225 "raid_level": "raid1", 00:11:12.225 "superblock": false, 00:11:12.225 "num_base_bdevs": 2, 00:11:12.225 "num_base_bdevs_discovered": 2, 00:11:12.225 "num_base_bdevs_operational": 2, 00:11:12.225 "process": { 00:11:12.225 "type": "rebuild", 00:11:12.225 "target": "spare", 00:11:12.225 "progress": { 00:11:12.225 "blocks": 20480, 00:11:12.225 "percent": 31 00:11:12.225 } 00:11:12.225 }, 00:11:12.225 "base_bdevs_list": [ 00:11:12.225 { 00:11:12.225 "name": "spare", 00:11:12.225 "uuid": 
"8397b659-c38e-548b-bba5-1a9e297de061", 00:11:12.225 "is_configured": true, 00:11:12.225 "data_offset": 0, 00:11:12.225 "data_size": 65536 00:11:12.225 }, 00:11:12.225 { 00:11:12.225 "name": "BaseBdev2", 00:11:12.225 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:12.225 "is_configured": true, 00:11:12.225 "data_offset": 0, 00:11:12.225 "data_size": 65536 00:11:12.225 } 00:11:12.225 ] 00:11:12.225 }' 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:12.225 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=287 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.485 "name": "raid_bdev1", 00:11:12.485 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:12.485 "strip_size_kb": 0, 00:11:12.485 "state": "online", 00:11:12.485 "raid_level": "raid1", 00:11:12.485 "superblock": false, 00:11:12.485 "num_base_bdevs": 2, 00:11:12.485 "num_base_bdevs_discovered": 2, 00:11:12.485 "num_base_bdevs_operational": 2, 00:11:12.485 "process": { 00:11:12.485 "type": "rebuild", 00:11:12.485 "target": "spare", 00:11:12.485 "progress": { 00:11:12.485 "blocks": 22528, 00:11:12.485 "percent": 34 00:11:12.485 } 00:11:12.485 }, 00:11:12.485 "base_bdevs_list": [ 00:11:12.485 { 00:11:12.485 "name": "spare", 00:11:12.485 "uuid": "8397b659-c38e-548b-bba5-1a9e297de061", 00:11:12.485 "is_configured": true, 00:11:12.485 "data_offset": 0, 00:11:12.485 "data_size": 65536 00:11:12.485 }, 00:11:12.485 { 00:11:12.485 "name": "BaseBdev2", 00:11:12.485 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:12.485 "is_configured": true, 00:11:12.485 "data_offset": 0, 00:11:12.485 "data_size": 65536 00:11:12.485 } 00:11:12.485 ] 00:11:12.485 }' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:12.485 23:06:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.439 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.439 "name": "raid_bdev1", 00:11:13.439 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:13.439 "strip_size_kb": 0, 00:11:13.439 "state": "online", 00:11:13.439 "raid_level": "raid1", 00:11:13.439 "superblock": false, 00:11:13.439 "num_base_bdevs": 2, 00:11:13.439 "num_base_bdevs_discovered": 2, 00:11:13.439 "num_base_bdevs_operational": 2, 00:11:13.439 "process": { 00:11:13.439 "type": "rebuild", 00:11:13.439 "target": "spare", 
00:11:13.439 "progress": { 00:11:13.439 "blocks": 45056, 00:11:13.439 "percent": 68 00:11:13.439 } 00:11:13.439 }, 00:11:13.439 "base_bdevs_list": [ 00:11:13.439 { 00:11:13.439 "name": "spare", 00:11:13.439 "uuid": "8397b659-c38e-548b-bba5-1a9e297de061", 00:11:13.439 "is_configured": true, 00:11:13.439 "data_offset": 0, 00:11:13.439 "data_size": 65536 00:11:13.439 }, 00:11:13.439 { 00:11:13.440 "name": "BaseBdev2", 00:11:13.440 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:13.440 "is_configured": true, 00:11:13.440 "data_offset": 0, 00:11:13.440 "data_size": 65536 00:11:13.440 } 00:11:13.440 ] 00:11:13.440 }' 00:11:13.440 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.705 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.705 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.705 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.705 23:06:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:14.645 [2024-11-18 23:06:33.689667] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:14.645 [2024-11-18 23:06:33.689732] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:14.645 [2024-11-18 23:06:33.689774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.645 "name": "raid_bdev1", 00:11:14.645 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:14.645 "strip_size_kb": 0, 00:11:14.645 "state": "online", 00:11:14.645 "raid_level": "raid1", 00:11:14.645 "superblock": false, 00:11:14.645 "num_base_bdevs": 2, 00:11:14.645 "num_base_bdevs_discovered": 2, 00:11:14.645 "num_base_bdevs_operational": 2, 00:11:14.645 "base_bdevs_list": [ 00:11:14.645 { 00:11:14.645 "name": "spare", 00:11:14.645 "uuid": "8397b659-c38e-548b-bba5-1a9e297de061", 00:11:14.645 "is_configured": true, 00:11:14.645 "data_offset": 0, 00:11:14.645 "data_size": 65536 00:11:14.645 }, 00:11:14.645 { 00:11:14.645 "name": "BaseBdev2", 00:11:14.645 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:14.645 "is_configured": true, 00:11:14.645 "data_offset": 0, 00:11:14.645 "data_size": 65536 00:11:14.645 } 00:11:14.645 ] 00:11:14.645 }' 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:14.645 23:06:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:14.645 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.905 "name": "raid_bdev1", 00:11:14.905 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:14.905 "strip_size_kb": 0, 00:11:14.905 "state": "online", 00:11:14.905 "raid_level": "raid1", 00:11:14.905 "superblock": false, 00:11:14.905 "num_base_bdevs": 2, 00:11:14.905 "num_base_bdevs_discovered": 2, 00:11:14.905 "num_base_bdevs_operational": 2, 00:11:14.905 "base_bdevs_list": [ 00:11:14.905 { 00:11:14.905 "name": "spare", 00:11:14.905 "uuid": "8397b659-c38e-548b-bba5-1a9e297de061", 00:11:14.905 "is_configured": true, 00:11:14.905 "data_offset": 0, 00:11:14.905 "data_size": 65536 
00:11:14.905 }, 00:11:14.905 { 00:11:14.905 "name": "BaseBdev2", 00:11:14.905 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:14.905 "is_configured": true, 00:11:14.905 "data_offset": 0, 00:11:14.905 "data_size": 65536 00:11:14.905 } 00:11:14.905 ] 00:11:14.905 }' 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.905 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.906 "name": "raid_bdev1", 00:11:14.906 "uuid": "b1710738-1462-4625-9ed3-52d9a1842365", 00:11:14.906 "strip_size_kb": 0, 00:11:14.906 "state": "online", 00:11:14.906 "raid_level": "raid1", 00:11:14.906 "superblock": false, 00:11:14.906 "num_base_bdevs": 2, 00:11:14.906 "num_base_bdevs_discovered": 2, 00:11:14.906 "num_base_bdevs_operational": 2, 00:11:14.906 "base_bdevs_list": [ 00:11:14.906 { 00:11:14.906 "name": "spare", 00:11:14.906 "uuid": "8397b659-c38e-548b-bba5-1a9e297de061", 00:11:14.906 "is_configured": true, 00:11:14.906 "data_offset": 0, 00:11:14.906 "data_size": 65536 00:11:14.906 }, 00:11:14.906 { 00:11:14.906 "name": "BaseBdev2", 00:11:14.906 "uuid": "a50f89ae-b4cd-542f-9457-7d3c73ea88c5", 00:11:14.906 "is_configured": true, 00:11:14.906 "data_offset": 0, 00:11:14.906 "data_size": 65536 00:11:14.906 } 00:11:14.906 ] 00:11:14.906 }' 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.906 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.164 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.164 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.164 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.164 [2024-11-18 23:06:34.536477] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.164 [2024-11-18 23:06:34.536547] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:11:15.164 [2024-11-18 23:06:34.536652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.164 [2024-11-18 23:06:34.536735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.164 [2024-11-18 23:06:34.536792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:15.425 /dev/nbd0 00:11:15.425 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.686 1+0 records in 00:11:15.686 1+0 records out 00:11:15.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263425 s, 15.5 MB/s 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.686 23:06:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:15.686 /dev/nbd1 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.686 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:11:15.686 1+0 records in 00:11:15.686 1+0 records out 00:11:15.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507673 s, 8.1 MB/s 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.946 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:16.247 23:06:35 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85965 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 
85965 ']' 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85965 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.247 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85965 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85965' 00:11:16.521 killing process with pid 85965 00:11:16.521 Received shutdown signal, test time was about 60.000000 seconds 00:11:16.521 00:11:16.521 Latency(us) 00:11:16.521 [2024-11-18T23:06:35.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.521 [2024-11-18T23:06:35.899Z] =================================================================================================================== 00:11:16.521 [2024-11-18T23:06:35.899Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85965 00:11:16.521 [2024-11-18 23:06:35.600431] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85965 00:11:16.521 [2024-11-18 23:06:35.631454] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:16.521 00:11:16.521 real 0m13.184s 00:11:16.521 user 0m15.230s 00:11:16.521 sys 0m2.872s 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:16.521 23:06:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.521 ************************************ 00:11:16.521 END TEST raid_rebuild_test 00:11:16.521 ************************************ 00:11:16.780 23:06:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:16.781 23:06:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:16.781 23:06:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.781 23:06:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.781 ************************************ 00:11:16.781 START TEST raid_rebuild_test_sb 00:11:16.781 ************************************ 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86360 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86360 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86360 ']' 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.781 23:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:16.781 Zero copy mechanism will not be used. 00:11:16.781 [2024-11-18 23:06:36.034738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:16.781 [2024-11-18 23:06:36.034876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86360 ] 00:11:17.041 [2024-11-18 23:06:36.193135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.041 [2024-11-18 23:06:36.236956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.041 [2024-11-18 23:06:36.278397] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.041 [2024-11-18 23:06:36.278529] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.611 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.611 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:17.611 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:17.611 23:06:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.611 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.611 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.611 BaseBdev1_malloc 00:11:17.611 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 [2024-11-18 23:06:36.872234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:17.612 [2024-11-18 23:06:36.872305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.612 [2024-11-18 23:06:36.872330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:17.612 [2024-11-18 23:06:36.872349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.612 [2024-11-18 23:06:36.874446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.612 [2024-11-18 23:06:36.874478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.612 BaseBdev1 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 BaseBdev2_malloc 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 [2024-11-18 23:06:36.916915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:17.612 [2024-11-18 23:06:36.917018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.612 [2024-11-18 23:06:36.917063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.612 [2024-11-18 23:06:36.917084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.612 [2024-11-18 23:06:36.921769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.612 [2024-11-18 23:06:36.921840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.612 BaseBdev2 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 spare_malloc 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 spare_delay 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 [2024-11-18 23:06:36.959478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:17.612 [2024-11-18 23:06:36.959527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.612 [2024-11-18 23:06:36.959546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:17.612 [2024-11-18 23:06:36.959555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.612 [2024-11-18 23:06:36.961612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.612 [2024-11-18 23:06:36.961687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:17.612 spare 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:17.612 [2024-11-18 23:06:36.971495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.612 [2024-11-18 23:06:36.973265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.612 [2024-11-18 23:06:36.973423] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:17.612 [2024-11-18 23:06:36.973436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.612 [2024-11-18 23:06:36.973662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:17.612 [2024-11-18 23:06:36.973800] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:17.612 [2024-11-18 23:06:36.973811] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:17.612 [2024-11-18 23:06:36.973931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.612 23:06:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.612 23:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.871 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.871 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.871 "name": "raid_bdev1", 00:11:17.871 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:17.871 "strip_size_kb": 0, 00:11:17.871 "state": "online", 00:11:17.871 "raid_level": "raid1", 00:11:17.871 "superblock": true, 00:11:17.871 "num_base_bdevs": 2, 00:11:17.871 "num_base_bdevs_discovered": 2, 00:11:17.871 "num_base_bdevs_operational": 2, 00:11:17.871 "base_bdevs_list": [ 00:11:17.871 { 00:11:17.871 "name": "BaseBdev1", 00:11:17.871 "uuid": "4311264f-7796-5146-862f-e30354d8cd7d", 00:11:17.871 "is_configured": true, 00:11:17.871 "data_offset": 2048, 00:11:17.871 "data_size": 63488 00:11:17.871 }, 00:11:17.871 { 00:11:17.871 "name": "BaseBdev2", 00:11:17.871 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:17.871 "is_configured": true, 00:11:17.871 "data_offset": 2048, 00:11:17.871 "data_size": 63488 00:11:17.871 } 00:11:17.871 ] 00:11:17.871 }' 00:11:17.871 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.871 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 [2024-11-18 23:06:37.454967] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:18.131 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.391 
23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:18.391 [2024-11-18 23:06:37.686427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:18.391 /dev/nbd0 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.391 23:06:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.391 1+0 records in 00:11:18.391 1+0 records out 00:11:18.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391435 s, 10.5 MB/s 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:18.391 23:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:23.682 63488+0 records in 00:11:23.682 63488+0 records out 00:11:23.682 32505856 bytes (33 MB, 31 MiB) copied, 4.28137 s, 7.6 MB/s 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:23.682 [2024-11-18 23:06:42.238546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.682 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 [2024-11-18 23:06:42.290457] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 23:06:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.683 "name": "raid_bdev1", 00:11:23.683 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:23.683 "strip_size_kb": 0, 00:11:23.683 "state": "online", 00:11:23.683 "raid_level": "raid1", 00:11:23.683 "superblock": true, 00:11:23.683 "num_base_bdevs": 2, 
00:11:23.683 "num_base_bdevs_discovered": 1, 00:11:23.683 "num_base_bdevs_operational": 1, 00:11:23.683 "base_bdevs_list": [ 00:11:23.683 { 00:11:23.683 "name": null, 00:11:23.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.683 "is_configured": false, 00:11:23.683 "data_offset": 0, 00:11:23.683 "data_size": 63488 00:11:23.683 }, 00:11:23.683 { 00:11:23.683 "name": "BaseBdev2", 00:11:23.683 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:23.683 "is_configured": true, 00:11:23.683 "data_offset": 2048, 00:11:23.683 "data_size": 63488 00:11:23.683 } 00:11:23.683 ] 00:11:23.683 }' 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 [2024-11-18 23:06:42.701709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:23.683 [2024-11-18 23:06:42.705906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 [2024-11-18 23:06:42.707776] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:23.683 23:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:24.622 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:24.622 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.623 23:06:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.623 "name": "raid_bdev1", 00:11:24.623 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:24.623 "strip_size_kb": 0, 00:11:24.623 "state": "online", 00:11:24.623 "raid_level": "raid1", 00:11:24.623 "superblock": true, 00:11:24.623 "num_base_bdevs": 2, 00:11:24.623 "num_base_bdevs_discovered": 2, 00:11:24.623 "num_base_bdevs_operational": 2, 00:11:24.623 "process": { 00:11:24.623 "type": "rebuild", 00:11:24.623 "target": "spare", 00:11:24.623 "progress": { 00:11:24.623 "blocks": 20480, 00:11:24.623 "percent": 32 00:11:24.623 } 00:11:24.623 }, 00:11:24.623 "base_bdevs_list": [ 00:11:24.623 { 00:11:24.623 "name": "spare", 00:11:24.623 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:24.623 "is_configured": true, 00:11:24.623 "data_offset": 2048, 00:11:24.623 "data_size": 63488 00:11:24.623 }, 00:11:24.623 { 00:11:24.623 "name": "BaseBdev2", 00:11:24.623 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:24.623 "is_configured": true, 00:11:24.623 "data_offset": 2048, 00:11:24.623 "data_size": 63488 00:11:24.623 } 
00:11:24.623 ] 00:11:24.623 }' 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.623 [2024-11-18 23:06:43.856600] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:24.623 [2024-11-18 23:06:43.912485] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:24.623 [2024-11-18 23:06:43.912533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.623 [2024-11-18 23:06:43.912550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:24.623 [2024-11-18 23:06:43.912557] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.623 "name": "raid_bdev1", 00:11:24.623 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:24.623 "strip_size_kb": 0, 00:11:24.623 "state": "online", 00:11:24.623 "raid_level": "raid1", 00:11:24.623 "superblock": true, 00:11:24.623 "num_base_bdevs": 2, 00:11:24.623 "num_base_bdevs_discovered": 1, 00:11:24.623 "num_base_bdevs_operational": 1, 00:11:24.623 "base_bdevs_list": [ 00:11:24.623 { 00:11:24.623 "name": null, 00:11:24.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.623 "is_configured": false, 00:11:24.623 "data_offset": 0, 00:11:24.623 "data_size": 63488 00:11:24.623 }, 00:11:24.623 { 00:11:24.623 "name": "BaseBdev2", 00:11:24.623 "uuid": 
"12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:24.623 "is_configured": true, 00:11:24.623 "data_offset": 2048, 00:11:24.623 "data_size": 63488 00:11:24.623 } 00:11:24.623 ] 00:11:24.623 }' 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.623 23:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.200 "name": "raid_bdev1", 00:11:25.200 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:25.200 "strip_size_kb": 0, 00:11:25.200 "state": "online", 00:11:25.200 "raid_level": "raid1", 00:11:25.200 "superblock": true, 00:11:25.200 "num_base_bdevs": 2, 00:11:25.200 "num_base_bdevs_discovered": 1, 00:11:25.200 "num_base_bdevs_operational": 1, 00:11:25.200 "base_bdevs_list": [ 00:11:25.200 { 
00:11:25.200 "name": null, 00:11:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.200 "is_configured": false, 00:11:25.200 "data_offset": 0, 00:11:25.200 "data_size": 63488 00:11:25.200 }, 00:11:25.200 { 00:11:25.200 "name": "BaseBdev2", 00:11:25.200 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:25.200 "is_configured": true, 00:11:25.200 "data_offset": 2048, 00:11:25.200 "data_size": 63488 00:11:25.200 } 00:11:25.200 ] 00:11:25.200 }' 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.200 [2024-11-18 23:06:44.499884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:25.200 [2024-11-18 23:06:44.504077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.200 23:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:25.200 [2024-11-18 23:06:44.505931] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:26.139 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.139 23:06:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.139 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.139 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.139 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.398 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.399 "name": "raid_bdev1", 00:11:26.399 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:26.399 "strip_size_kb": 0, 00:11:26.399 "state": "online", 00:11:26.399 "raid_level": "raid1", 00:11:26.399 "superblock": true, 00:11:26.399 "num_base_bdevs": 2, 00:11:26.399 "num_base_bdevs_discovered": 2, 00:11:26.399 "num_base_bdevs_operational": 2, 00:11:26.399 "process": { 00:11:26.399 "type": "rebuild", 00:11:26.399 "target": "spare", 00:11:26.399 "progress": { 00:11:26.399 "blocks": 20480, 00:11:26.399 "percent": 32 00:11:26.399 } 00:11:26.399 }, 00:11:26.399 "base_bdevs_list": [ 00:11:26.399 { 00:11:26.399 "name": "spare", 00:11:26.399 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:26.399 "is_configured": true, 00:11:26.399 "data_offset": 2048, 00:11:26.399 "data_size": 63488 00:11:26.399 }, 00:11:26.399 { 00:11:26.399 "name": "BaseBdev2", 00:11:26.399 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:26.399 
"is_configured": true, 00:11:26.399 "data_offset": 2048, 00:11:26.399 "data_size": 63488 00:11:26.399 } 00:11:26.399 ] 00:11:26.399 }' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:26.399 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.399 "name": "raid_bdev1", 00:11:26.399 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:26.399 "strip_size_kb": 0, 00:11:26.399 "state": "online", 00:11:26.399 "raid_level": "raid1", 00:11:26.399 "superblock": true, 00:11:26.399 "num_base_bdevs": 2, 00:11:26.399 "num_base_bdevs_discovered": 2, 00:11:26.399 "num_base_bdevs_operational": 2, 00:11:26.399 "process": { 00:11:26.399 "type": "rebuild", 00:11:26.399 "target": "spare", 00:11:26.399 "progress": { 00:11:26.399 "blocks": 22528, 00:11:26.399 "percent": 35 00:11:26.399 } 00:11:26.399 }, 00:11:26.399 "base_bdevs_list": [ 00:11:26.399 { 00:11:26.399 "name": "spare", 00:11:26.399 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:26.399 "is_configured": true, 00:11:26.399 "data_offset": 2048, 00:11:26.399 "data_size": 63488 00:11:26.399 }, 00:11:26.399 { 00:11:26.399 "name": "BaseBdev2", 00:11:26.399 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:26.399 "is_configured": true, 00:11:26.399 "data_offset": 2048, 00:11:26.399 "data_size": 63488 00:11:26.399 } 00:11:26.399 ] 00:11:26.399 }' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.399 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.399 23:06:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.657 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.657 23:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.596 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.596 "name": "raid_bdev1", 00:11:27.596 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:27.596 "strip_size_kb": 0, 00:11:27.596 "state": "online", 00:11:27.596 "raid_level": "raid1", 00:11:27.596 "superblock": true, 00:11:27.596 "num_base_bdevs": 2, 00:11:27.596 "num_base_bdevs_discovered": 2, 00:11:27.596 "num_base_bdevs_operational": 2, 00:11:27.596 "process": { 
00:11:27.596 "type": "rebuild", 00:11:27.596 "target": "spare", 00:11:27.596 "progress": { 00:11:27.596 "blocks": 45056, 00:11:27.596 "percent": 70 00:11:27.596 } 00:11:27.596 }, 00:11:27.596 "base_bdevs_list": [ 00:11:27.596 { 00:11:27.596 "name": "spare", 00:11:27.596 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:27.596 "is_configured": true, 00:11:27.596 "data_offset": 2048, 00:11:27.596 "data_size": 63488 00:11:27.596 }, 00:11:27.596 { 00:11:27.596 "name": "BaseBdev2", 00:11:27.596 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:27.596 "is_configured": true, 00:11:27.597 "data_offset": 2048, 00:11:27.597 "data_size": 63488 00:11:27.597 } 00:11:27.597 ] 00:11:27.597 }' 00:11:27.597 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.597 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.597 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.597 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.597 23:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:28.536 [2024-11-18 23:06:47.616337] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:28.536 [2024-11-18 23:06:47.616485] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:28.536 [2024-11-18 23:06:47.616598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.796 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:28.796 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.796 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.796 
23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.797 23:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.797 "name": "raid_bdev1", 00:11:28.797 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:28.797 "strip_size_kb": 0, 00:11:28.797 "state": "online", 00:11:28.797 "raid_level": "raid1", 00:11:28.797 "superblock": true, 00:11:28.797 "num_base_bdevs": 2, 00:11:28.797 "num_base_bdevs_discovered": 2, 00:11:28.797 "num_base_bdevs_operational": 2, 00:11:28.797 "base_bdevs_list": [ 00:11:28.797 { 00:11:28.797 "name": "spare", 00:11:28.797 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:28.797 "is_configured": true, 00:11:28.797 "data_offset": 2048, 00:11:28.797 "data_size": 63488 00:11:28.797 }, 00:11:28.797 { 00:11:28.797 "name": "BaseBdev2", 00:11:28.797 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:28.797 "is_configured": true, 00:11:28.797 "data_offset": 2048, 00:11:28.797 "data_size": 63488 00:11:28.797 } 00:11:28.797 ] 00:11:28.797 }' 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.797 "name": "raid_bdev1", 00:11:28.797 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:28.797 "strip_size_kb": 0, 00:11:28.797 "state": "online", 00:11:28.797 "raid_level": "raid1", 00:11:28.797 "superblock": true, 00:11:28.797 "num_base_bdevs": 2, 00:11:28.797 "num_base_bdevs_discovered": 2, 00:11:28.797 "num_base_bdevs_operational": 2, 00:11:28.797 "base_bdevs_list": [ 00:11:28.797 { 00:11:28.797 
"name": "spare", 00:11:28.797 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:28.797 "is_configured": true, 00:11:28.797 "data_offset": 2048, 00:11:28.797 "data_size": 63488 00:11:28.797 }, 00:11:28.797 { 00:11:28.797 "name": "BaseBdev2", 00:11:28.797 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:28.797 "is_configured": true, 00:11:28.797 "data_offset": 2048, 00:11:28.797 "data_size": 63488 00:11:28.797 } 00:11:28.797 ] 00:11:28.797 }' 00:11:28.797 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.057 "name": "raid_bdev1", 00:11:29.057 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:29.057 "strip_size_kb": 0, 00:11:29.057 "state": "online", 00:11:29.057 "raid_level": "raid1", 00:11:29.057 "superblock": true, 00:11:29.057 "num_base_bdevs": 2, 00:11:29.057 "num_base_bdevs_discovered": 2, 00:11:29.057 "num_base_bdevs_operational": 2, 00:11:29.057 "base_bdevs_list": [ 00:11:29.057 { 00:11:29.057 "name": "spare", 00:11:29.057 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:29.057 "is_configured": true, 00:11:29.057 "data_offset": 2048, 00:11:29.057 "data_size": 63488 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "name": "BaseBdev2", 00:11:29.057 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:29.057 "is_configured": true, 00:11:29.057 "data_offset": 2048, 00:11:29.057 "data_size": 63488 00:11:29.057 } 00:11:29.057 ] 00:11:29.057 }' 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.057 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.317 [2024-11-18 23:06:48.635334] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.317 [2024-11-18 23:06:48.635407] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.317 [2024-11-18 23:06:48.635527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.317 [2024-11-18 23:06:48.635622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.317 [2024-11-18 23:06:48.635678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:29.317 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:29.587 /dev/nbd0 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.587 1+0 records in 00:11:29.587 1+0 records out 00:11:29.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520845 s, 7.9 MB/s 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:29.587 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.588 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:29.588 23:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:29.588 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.588 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.588 23:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:29.850 /dev/nbd1 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:29.850 23:06:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:29.850 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.851 1+0 records in 00:11:29.851 1+0 records out 00:11:29.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316667 s, 12.9 MB/s 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.851 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:30.134 
23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.134 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.412 [2024-11-18 23:06:49.684261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.412 [2024-11-18 23:06:49.684333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.412 [2024-11-18 23:06:49.684356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.412 [2024-11-18 23:06:49.684369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.412 [2024-11-18 23:06:49.686436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.412 [2024-11-18 23:06:49.686462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.412 [2024-11-18 23:06:49.686541] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:30.412 [2024-11-18 
23:06:49.686599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:30.412 [2024-11-18 23:06:49.686710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.412 spare 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.412 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.412 [2024-11-18 23:06:49.786615] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:30.412 [2024-11-18 23:06:49.786640] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.412 [2024-11-18 23:06:49.786909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:30.412 [2024-11-18 23:06:49.787058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:30.412 [2024-11-18 23:06:49.787071] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:30.412 [2024-11-18 23:06:49.787208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.673 "name": "raid_bdev1", 00:11:30.673 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:30.673 "strip_size_kb": 0, 00:11:30.673 "state": "online", 00:11:30.673 "raid_level": "raid1", 00:11:30.673 "superblock": true, 00:11:30.673 "num_base_bdevs": 2, 00:11:30.673 "num_base_bdevs_discovered": 2, 00:11:30.673 "num_base_bdevs_operational": 2, 00:11:30.673 "base_bdevs_list": [ 00:11:30.673 { 00:11:30.673 "name": "spare", 00:11:30.673 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:30.673 "is_configured": true, 00:11:30.673 "data_offset": 2048, 00:11:30.673 "data_size": 63488 00:11:30.673 }, 00:11:30.673 { 00:11:30.673 "name": "BaseBdev2", 00:11:30.673 "uuid": 
"12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:30.673 "is_configured": true, 00:11:30.673 "data_offset": 2048, 00:11:30.673 "data_size": 63488 00:11:30.673 } 00:11:30.673 ] 00:11:30.673 }' 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.673 23:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.933 "name": "raid_bdev1", 00:11:30.933 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:30.933 "strip_size_kb": 0, 00:11:30.933 "state": "online", 00:11:30.933 "raid_level": "raid1", 00:11:30.933 "superblock": true, 00:11:30.933 "num_base_bdevs": 2, 00:11:30.933 "num_base_bdevs_discovered": 2, 00:11:30.933 "num_base_bdevs_operational": 2, 00:11:30.933 "base_bdevs_list": [ 00:11:30.933 { 
00:11:30.933 "name": "spare", 00:11:30.933 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:30.933 "is_configured": true, 00:11:30.933 "data_offset": 2048, 00:11:30.933 "data_size": 63488 00:11:30.933 }, 00:11:30.933 { 00:11:30.933 "name": "BaseBdev2", 00:11:30.933 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:30.933 "is_configured": true, 00:11:30.933 "data_offset": 2048, 00:11:30.933 "data_size": 63488 00:11:30.933 } 00:11:30.933 ] 00:11:30.933 }' 00:11:30.933 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.192 [2024-11-18 23:06:50.443329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.192 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.192 "name": "raid_bdev1", 00:11:31.192 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:31.192 "strip_size_kb": 0, 00:11:31.192 
"state": "online", 00:11:31.192 "raid_level": "raid1", 00:11:31.192 "superblock": true, 00:11:31.192 "num_base_bdevs": 2, 00:11:31.192 "num_base_bdevs_discovered": 1, 00:11:31.192 "num_base_bdevs_operational": 1, 00:11:31.192 "base_bdevs_list": [ 00:11:31.192 { 00:11:31.192 "name": null, 00:11:31.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.192 "is_configured": false, 00:11:31.193 "data_offset": 0, 00:11:31.193 "data_size": 63488 00:11:31.193 }, 00:11:31.193 { 00:11:31.193 "name": "BaseBdev2", 00:11:31.193 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:31.193 "is_configured": true, 00:11:31.193 "data_offset": 2048, 00:11:31.193 "data_size": 63488 00:11:31.193 } 00:11:31.193 ] 00:11:31.193 }' 00:11:31.193 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.193 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.766 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:31.766 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.766 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.766 [2024-11-18 23:06:50.854733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:31.766 [2024-11-18 23:06:50.854916] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:31.766 [2024-11-18 23:06:50.854930] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:31.767 [2024-11-18 23:06:50.854969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:31.767 [2024-11-18 23:06:50.858968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:31.767 23:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.767 23:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:31.767 [2024-11-18 23:06:50.860847] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.705 "name": "raid_bdev1", 00:11:32.705 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:32.705 "strip_size_kb": 0, 00:11:32.705 "state": "online", 00:11:32.705 "raid_level": "raid1", 
00:11:32.705 "superblock": true, 00:11:32.705 "num_base_bdevs": 2, 00:11:32.705 "num_base_bdevs_discovered": 2, 00:11:32.705 "num_base_bdevs_operational": 2, 00:11:32.705 "process": { 00:11:32.705 "type": "rebuild", 00:11:32.705 "target": "spare", 00:11:32.705 "progress": { 00:11:32.705 "blocks": 20480, 00:11:32.705 "percent": 32 00:11:32.705 } 00:11:32.705 }, 00:11:32.705 "base_bdevs_list": [ 00:11:32.705 { 00:11:32.705 "name": "spare", 00:11:32.705 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:32.705 "is_configured": true, 00:11:32.705 "data_offset": 2048, 00:11:32.705 "data_size": 63488 00:11:32.705 }, 00:11:32.705 { 00:11:32.705 "name": "BaseBdev2", 00:11:32.705 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:32.705 "is_configured": true, 00:11:32.705 "data_offset": 2048, 00:11:32.705 "data_size": 63488 00:11:32.705 } 00:11:32.705 ] 00:11:32.705 }' 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.705 23:06:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.705 [2024-11-18 23:06:51.973753] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.705 [2024-11-18 23:06:52.064837] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:32.705 [2024-11-18 23:06:52.064935] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:32.705 [2024-11-18 23:06:52.064989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.705 [2024-11-18 23:06:52.065009] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.705 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.977 "name": "raid_bdev1", 00:11:32.977 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:32.977 "strip_size_kb": 0, 00:11:32.977 "state": "online", 00:11:32.977 "raid_level": "raid1", 00:11:32.977 "superblock": true, 00:11:32.977 "num_base_bdevs": 2, 00:11:32.977 "num_base_bdevs_discovered": 1, 00:11:32.977 "num_base_bdevs_operational": 1, 00:11:32.977 "base_bdevs_list": [ 00:11:32.977 { 00:11:32.977 "name": null, 00:11:32.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.977 "is_configured": false, 00:11:32.977 "data_offset": 0, 00:11:32.977 "data_size": 63488 00:11:32.977 }, 00:11:32.977 { 00:11:32.977 "name": "BaseBdev2", 00:11:32.977 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:32.977 "is_configured": true, 00:11:32.977 "data_offset": 2048, 00:11:32.977 "data_size": 63488 00:11:32.977 } 00:11:32.977 ] 00:11:32.977 }' 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.977 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.238 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:33.238 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.238 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.238 [2024-11-18 23:06:52.452499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:33.238 [2024-11-18 23:06:52.452626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.238 [2024-11-18 23:06:52.452652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:33.238 [2024-11-18 23:06:52.452662] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.238 [2024-11-18 23:06:52.453083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.238 [2024-11-18 23:06:52.453101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:33.238 [2024-11-18 23:06:52.453181] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:33.238 [2024-11-18 23:06:52.453192] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:33.238 [2024-11-18 23:06:52.453207] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:33.238 [2024-11-18 23:06:52.453227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.238 [2024-11-18 23:06:52.457032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:33.238 spare 00:11:33.238 23:06:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.238 23:06:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:33.238 [2024-11-18 23:06:52.458904] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:34.178 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.178 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.178 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.178 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.178 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.179 "name": "raid_bdev1", 00:11:34.179 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:34.179 "strip_size_kb": 0, 00:11:34.179 "state": "online", 00:11:34.179 "raid_level": "raid1", 00:11:34.179 "superblock": true, 00:11:34.179 "num_base_bdevs": 2, 00:11:34.179 "num_base_bdevs_discovered": 2, 00:11:34.179 "num_base_bdevs_operational": 2, 00:11:34.179 "process": { 00:11:34.179 "type": "rebuild", 00:11:34.179 "target": "spare", 00:11:34.179 "progress": { 00:11:34.179 "blocks": 20480, 00:11:34.179 "percent": 32 00:11:34.179 } 00:11:34.179 }, 00:11:34.179 "base_bdevs_list": [ 00:11:34.179 { 00:11:34.179 "name": "spare", 00:11:34.179 "uuid": "27c62d08-7a99-59de-828c-b5722c379f00", 00:11:34.179 "is_configured": true, 00:11:34.179 "data_offset": 2048, 00:11:34.179 "data_size": 63488 00:11:34.179 }, 00:11:34.179 { 00:11:34.179 "name": "BaseBdev2", 00:11:34.179 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:34.179 "is_configured": true, 00:11:34.179 "data_offset": 2048, 00:11:34.179 "data_size": 63488 00:11:34.179 } 00:11:34.179 ] 00:11:34.179 }' 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.179 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.448 
23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.448 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:34.448 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.448 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.448 [2024-11-18 23:06:53.595885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.449 [2024-11-18 23:06:53.662819] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:34.449 [2024-11-18 23:06:53.662935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.449 [2024-11-18 23:06:53.662952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.449 [2024-11-18 23:06:53.662961] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.449 "name": "raid_bdev1", 00:11:34.449 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:34.449 "strip_size_kb": 0, 00:11:34.449 "state": "online", 00:11:34.449 "raid_level": "raid1", 00:11:34.449 "superblock": true, 00:11:34.449 "num_base_bdevs": 2, 00:11:34.449 "num_base_bdevs_discovered": 1, 00:11:34.449 "num_base_bdevs_operational": 1, 00:11:34.449 "base_bdevs_list": [ 00:11:34.449 { 00:11:34.449 "name": null, 00:11:34.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.449 "is_configured": false, 00:11:34.449 "data_offset": 0, 00:11:34.449 "data_size": 63488 00:11:34.449 }, 00:11:34.449 { 00:11:34.449 "name": "BaseBdev2", 00:11:34.449 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:34.449 "is_configured": true, 00:11:34.449 "data_offset": 2048, 00:11:34.449 "data_size": 63488 00:11:34.449 } 00:11:34.449 ] 00:11:34.449 }' 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.449 23:06:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.020 23:06:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.020 "name": "raid_bdev1", 00:11:35.020 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:35.020 "strip_size_kb": 0, 00:11:35.020 "state": "online", 00:11:35.020 "raid_level": "raid1", 00:11:35.020 "superblock": true, 00:11:35.020 "num_base_bdevs": 2, 00:11:35.020 "num_base_bdevs_discovered": 1, 00:11:35.020 "num_base_bdevs_operational": 1, 00:11:35.020 "base_bdevs_list": [ 00:11:35.020 { 00:11:35.020 "name": null, 00:11:35.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.020 "is_configured": false, 00:11:35.020 "data_offset": 0, 00:11:35.020 "data_size": 63488 00:11:35.020 }, 00:11:35.020 { 00:11:35.020 "name": "BaseBdev2", 00:11:35.020 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:35.020 "is_configured": true, 00:11:35.020 "data_offset": 2048, 00:11:35.020 "data_size": 
63488 00:11:35.020 } 00:11:35.020 ] 00:11:35.020 }' 00:11:35.020 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.021 [2024-11-18 23:06:54.290055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:35.021 [2024-11-18 23:06:54.290159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.021 [2024-11-18 23:06:54.290183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:35.021 [2024-11-18 23:06:54.290193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.021 [2024-11-18 23:06:54.290603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.021 [2024-11-18 23:06:54.290625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:35.021 [2024-11-18 23:06:54.290693] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:35.021 [2024-11-18 23:06:54.290710] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:35.021 [2024-11-18 23:06:54.290717] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:35.021 [2024-11-18 23:06:54.290727] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:35.021 BaseBdev1 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.021 23:06:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.961 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.221 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.221 "name": "raid_bdev1", 00:11:36.221 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:36.221 "strip_size_kb": 0, 00:11:36.221 "state": "online", 00:11:36.221 "raid_level": "raid1", 00:11:36.221 "superblock": true, 00:11:36.221 "num_base_bdevs": 2, 00:11:36.221 "num_base_bdevs_discovered": 1, 00:11:36.221 "num_base_bdevs_operational": 1, 00:11:36.221 "base_bdevs_list": [ 00:11:36.221 { 00:11:36.221 "name": null, 00:11:36.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.221 "is_configured": false, 00:11:36.221 "data_offset": 0, 00:11:36.221 "data_size": 63488 00:11:36.221 }, 00:11:36.221 { 00:11:36.221 "name": "BaseBdev2", 00:11:36.221 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:36.221 "is_configured": true, 00:11:36.221 "data_offset": 2048, 00:11:36.221 "data_size": 63488 00:11:36.221 } 00:11:36.221 ] 00:11:36.221 }' 00:11:36.221 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.221 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.481 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.481 "name": "raid_bdev1", 00:11:36.481 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:36.481 "strip_size_kb": 0, 00:11:36.481 "state": "online", 00:11:36.481 "raid_level": "raid1", 00:11:36.481 "superblock": true, 00:11:36.481 "num_base_bdevs": 2, 00:11:36.481 "num_base_bdevs_discovered": 1, 00:11:36.481 "num_base_bdevs_operational": 1, 00:11:36.481 "base_bdevs_list": [ 00:11:36.481 { 00:11:36.481 "name": null, 00:11:36.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.481 "is_configured": false, 00:11:36.481 "data_offset": 0, 00:11:36.481 "data_size": 63488 00:11:36.481 }, 00:11:36.481 { 00:11:36.481 "name": "BaseBdev2", 00:11:36.481 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:36.481 "is_configured": true, 00:11:36.481 "data_offset": 2048, 00:11:36.481 "data_size": 63488 00:11:36.481 } 00:11:36.481 ] 00:11:36.481 }' 00:11:36.482 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.482 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:36.482 23:06:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.747 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.748 [2024-11-18 23:06:55.903358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.748 [2024-11-18 23:06:55.903580] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:36.748 [2024-11-18 23:06:55.903638] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:36.748 request: 00:11:36.748 { 00:11:36.748 "base_bdev": "BaseBdev1", 00:11:36.748 "raid_bdev": "raid_bdev1", 00:11:36.748 "method": 
"bdev_raid_add_base_bdev", 00:11:36.748 "req_id": 1 00:11:36.748 } 00:11:36.748 Got JSON-RPC error response 00:11:36.748 response: 00:11:36.748 { 00:11:36.748 "code": -22, 00:11:36.748 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:36.748 } 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.748 23:06:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.688 23:06:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.688 "name": "raid_bdev1", 00:11:37.688 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:37.688 "strip_size_kb": 0, 00:11:37.688 "state": "online", 00:11:37.688 "raid_level": "raid1", 00:11:37.688 "superblock": true, 00:11:37.688 "num_base_bdevs": 2, 00:11:37.688 "num_base_bdevs_discovered": 1, 00:11:37.688 "num_base_bdevs_operational": 1, 00:11:37.688 "base_bdevs_list": [ 00:11:37.688 { 00:11:37.688 "name": null, 00:11:37.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.688 "is_configured": false, 00:11:37.688 "data_offset": 0, 00:11:37.688 "data_size": 63488 00:11:37.688 }, 00:11:37.688 { 00:11:37.688 "name": "BaseBdev2", 00:11:37.688 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:37.688 "is_configured": true, 00:11:37.688 "data_offset": 2048, 00:11:37.688 "data_size": 63488 00:11:37.688 } 00:11:37.688 ] 00:11:37.688 }' 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.688 23:06:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.259 "name": "raid_bdev1", 00:11:38.259 "uuid": "1dd5552d-e9e6-4e74-98a8-01860ad9dd0c", 00:11:38.259 "strip_size_kb": 0, 00:11:38.259 "state": "online", 00:11:38.259 "raid_level": "raid1", 00:11:38.259 "superblock": true, 00:11:38.259 "num_base_bdevs": 2, 00:11:38.259 "num_base_bdevs_discovered": 1, 00:11:38.259 "num_base_bdevs_operational": 1, 00:11:38.259 "base_bdevs_list": [ 00:11:38.259 { 00:11:38.259 "name": null, 00:11:38.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.259 "is_configured": false, 00:11:38.259 "data_offset": 0, 00:11:38.259 "data_size": 63488 00:11:38.259 }, 00:11:38.259 { 00:11:38.259 "name": "BaseBdev2", 00:11:38.259 "uuid": "12750bb8-d8e8-53d1-937f-0d1bc0f98950", 00:11:38.259 "is_configured": true, 00:11:38.259 "data_offset": 2048, 00:11:38.259 "data_size": 63488 00:11:38.259 } 00:11:38.259 ] 00:11:38.259 }' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86360 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86360 ']' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86360 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86360 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.259 killing process with pid 86360 00:11:38.259 Received shutdown signal, test time was about 60.000000 seconds 00:11:38.259 00:11:38.259 Latency(us) 00:11:38.259 [2024-11-18T23:06:57.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.259 [2024-11-18T23:06:57.637Z] =================================================================================================================== 00:11:38.259 [2024-11-18T23:06:57.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86360' 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86360 00:11:38.259 [2024-11-18 23:06:57.523583] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.259 [2024-11-18 
23:06:57.523703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.259 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86360 00:11:38.259 [2024-11-18 23:06:57.523753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.259 [2024-11-18 23:06:57.523762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:38.259 [2024-11-18 23:06:57.554718] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:38.520 00:11:38.520 real 0m21.848s 00:11:38.520 user 0m26.498s 00:11:38.520 sys 0m3.897s 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.520 ************************************ 00:11:38.520 END TEST raid_rebuild_test_sb 00:11:38.520 ************************************ 00:11:38.520 23:06:57 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:38.520 23:06:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:38.520 23:06:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.520 23:06:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.520 ************************************ 00:11:38.520 START TEST raid_rebuild_test_io 00:11:38.520 ************************************ 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:38.520 
23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:38.520 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87072 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87072 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87072 ']' 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.521 23:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:38.781 Zero copy mechanism will not be used. 00:11:38.781 [2024-11-18 23:06:57.959401] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:38.781 [2024-11-18 23:06:57.959516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87072 ] 00:11:38.781 [2024-11-18 23:06:58.102263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.781 [2024-11-18 23:06:58.144872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.041 [2024-11-18 23:06:58.186938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.041 [2024-11-18 23:06:58.187059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 BaseBdev1_malloc 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 [2024-11-18 23:06:58.797487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:39.612 [2024-11-18 23:06:58.797569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.612 [2024-11-18 23:06:58.797595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.612 [2024-11-18 23:06:58.797609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.612 [2024-11-18 23:06:58.799675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.612 [2024-11-18 23:06:58.799712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.612 BaseBdev1 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 BaseBdev2_malloc 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 [2024-11-18 23:06:58.842964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:39.612 [2024-11-18 23:06:58.843080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.612 [2024-11-18 23:06:58.843130] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.612 [2024-11-18 23:06:58.843153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.612 [2024-11-18 23:06:58.847792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.612 [2024-11-18 23:06:58.847848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.612 BaseBdev2 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 spare_malloc 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 spare_delay 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 [2024-11-18 23:06:58.886046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:39.612 [2024-11-18 23:06:58.886095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.612 [2024-11-18 23:06:58.886130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:39.612 [2024-11-18 23:06:58.886138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.612 [2024-11-18 23:06:58.888242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.612 [2024-11-18 23:06:58.888329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:39.612 spare 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.612 [2024-11-18 23:06:58.898056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.612 [2024-11-18 23:06:58.899877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.612 [2024-11-18 23:06:58.899959] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:39.612 [2024-11-18 23:06:58.899970] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:39.612 [2024-11-18 23:06:58.900214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:39.612 [2024-11-18 23:06:58.900355] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:39.612 [2024-11-18 23:06:58.900369] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:11:39.612 [2024-11-18 23:06:58.900490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.612 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.613 
"name": "raid_bdev1", 00:11:39.613 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:39.613 "strip_size_kb": 0, 00:11:39.613 "state": "online", 00:11:39.613 "raid_level": "raid1", 00:11:39.613 "superblock": false, 00:11:39.613 "num_base_bdevs": 2, 00:11:39.613 "num_base_bdevs_discovered": 2, 00:11:39.613 "num_base_bdevs_operational": 2, 00:11:39.613 "base_bdevs_list": [ 00:11:39.613 { 00:11:39.613 "name": "BaseBdev1", 00:11:39.613 "uuid": "e4e5dc0f-14a0-53c3-b31b-4dc13abe3eb0", 00:11:39.613 "is_configured": true, 00:11:39.613 "data_offset": 0, 00:11:39.613 "data_size": 65536 00:11:39.613 }, 00:11:39.613 { 00:11:39.613 "name": "BaseBdev2", 00:11:39.613 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:39.613 "is_configured": true, 00:11:39.613 "data_offset": 0, 00:11:39.613 "data_size": 65536 00:11:39.613 } 00:11:39.613 ] 00:11:39.613 }' 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.613 23:06:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.181 [2024-11-18 23:06:59.301624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.181 [2024-11-18 23:06:59.381194] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:40.181 23:06:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.181 "name": "raid_bdev1", 00:11:40.181 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:40.181 "strip_size_kb": 0, 00:11:40.181 "state": "online", 00:11:40.181 "raid_level": "raid1", 00:11:40.181 "superblock": false, 00:11:40.181 "num_base_bdevs": 2, 00:11:40.181 "num_base_bdevs_discovered": 1, 00:11:40.181 "num_base_bdevs_operational": 1, 00:11:40.181 "base_bdevs_list": [ 00:11:40.181 { 00:11:40.181 "name": null, 00:11:40.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.181 "is_configured": false, 00:11:40.181 "data_offset": 0, 00:11:40.181 "data_size": 65536 00:11:40.181 }, 00:11:40.181 { 00:11:40.181 "name": "BaseBdev2", 00:11:40.181 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:40.181 "is_configured": true, 00:11:40.181 "data_offset": 0, 00:11:40.181 "data_size": 65536 00:11:40.181 } 00:11:40.181 ] 00:11:40.181 }' 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:40.181 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.181 [2024-11-18 23:06:59.463023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:40.181 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:40.181 Zero copy mechanism will not be used. 00:11:40.181 Running I/O for 60 seconds... 00:11:40.440 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:40.440 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.440 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.699 [2024-11-18 23:06:59.816726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:40.699 23:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.699 23:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:40.699 [2024-11-18 23:06:59.858821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:40.699 [2024-11-18 23:06:59.860743] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:40.699 [2024-11-18 23:06:59.972241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:40.699 [2024-11-18 23:06:59.972625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:40.960 [2024-11-18 23:07:00.180730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:40.960 [2024-11-18 23:07:00.181085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:41.226 200.00 IOPS, 600.00 MiB/s 
[2024-11-18T23:07:00.604Z] [2024-11-18 23:07:00.509559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:41.226 [2024-11-18 23:07:00.509977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:41.494 [2024-11-18 23:07:00.712954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.494 23:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.754 "name": "raid_bdev1", 00:11:41.754 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:41.754 "strip_size_kb": 0, 00:11:41.754 "state": "online", 00:11:41.754 "raid_level": "raid1", 00:11:41.754 "superblock": false, 00:11:41.754 "num_base_bdevs": 2, 00:11:41.754 
"num_base_bdevs_discovered": 2, 00:11:41.754 "num_base_bdevs_operational": 2, 00:11:41.754 "process": { 00:11:41.754 "type": "rebuild", 00:11:41.754 "target": "spare", 00:11:41.754 "progress": { 00:11:41.754 "blocks": 12288, 00:11:41.754 "percent": 18 00:11:41.754 } 00:11:41.754 }, 00:11:41.754 "base_bdevs_list": [ 00:11:41.754 { 00:11:41.754 "name": "spare", 00:11:41.754 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:41.754 "is_configured": true, 00:11:41.754 "data_offset": 0, 00:11:41.754 "data_size": 65536 00:11:41.754 }, 00:11:41.754 { 00:11:41.754 "name": "BaseBdev2", 00:11:41.754 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:41.754 "is_configured": true, 00:11:41.754 "data_offset": 0, 00:11:41.754 "data_size": 65536 00:11:41.754 } 00:11:41.754 ] 00:11:41.754 }' 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.754 [2024-11-18 23:07:00.927949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:41.754 [2024-11-18 23:07:00.928411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.754 23:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.754 [2024-11-18 23:07:00.961180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:11:41.754 [2024-11-18 23:07:01.035461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:42.014 [2024-11-18 23:07:01.141336] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:42.014 [2024-11-18 23:07:01.153926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.014 [2024-11-18 23:07:01.153971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:42.014 [2024-11-18 23:07:01.153984] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:42.014 [2024-11-18 23:07:01.170221] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.014 23:07:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.014 "name": "raid_bdev1", 00:11:42.014 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:42.014 "strip_size_kb": 0, 00:11:42.014 "state": "online", 00:11:42.014 "raid_level": "raid1", 00:11:42.014 "superblock": false, 00:11:42.014 "num_base_bdevs": 2, 00:11:42.014 "num_base_bdevs_discovered": 1, 00:11:42.014 "num_base_bdevs_operational": 1, 00:11:42.014 "base_bdevs_list": [ 00:11:42.014 { 00:11:42.014 "name": null, 00:11:42.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.014 "is_configured": false, 00:11:42.014 "data_offset": 0, 00:11:42.014 "data_size": 65536 00:11:42.014 }, 00:11:42.014 { 00:11:42.014 "name": "BaseBdev2", 00:11:42.014 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:42.014 "is_configured": true, 00:11:42.014 "data_offset": 0, 00:11:42.014 "data_size": 65536 00:11:42.014 } 00:11:42.014 ] 00:11:42.014 }' 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.014 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.273 171.50 IOPS, 514.50 MiB/s [2024-11-18T23:07:01.651Z] 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.273 23:07:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.273 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.533 "name": "raid_bdev1", 00:11:42.533 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:42.533 "strip_size_kb": 0, 00:11:42.533 "state": "online", 00:11:42.533 "raid_level": "raid1", 00:11:42.533 "superblock": false, 00:11:42.533 "num_base_bdevs": 2, 00:11:42.533 "num_base_bdevs_discovered": 1, 00:11:42.533 "num_base_bdevs_operational": 1, 00:11:42.533 "base_bdevs_list": [ 00:11:42.533 { 00:11:42.533 "name": null, 00:11:42.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.533 "is_configured": false, 00:11:42.533 "data_offset": 0, 00:11:42.533 "data_size": 65536 00:11:42.533 }, 00:11:42.533 { 00:11:42.533 "name": "BaseBdev2", 00:11:42.533 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:42.533 "is_configured": true, 00:11:42.533 "data_offset": 0, 00:11:42.533 "data_size": 65536 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 }' 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.533 [2024-11-18 23:07:01.765706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.533 23:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:42.533 [2024-11-18 23:07:01.802421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:42.533 [2024-11-18 23:07:01.804321] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:42.806 [2024-11-18 23:07:01.921294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:42.807 [2024-11-18 23:07:01.921710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:42.807 [2024-11-18 23:07:02.129459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:42.807 [2024-11-18 23:07:02.129731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:43.385 [2024-11-18 23:07:02.457554] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:43.385 173.67 IOPS, 521.00 MiB/s [2024-11-18T23:07:02.763Z] [2024-11-18 23:07:02.687530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.646 "name": "raid_bdev1", 00:11:43.646 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:43.646 "strip_size_kb": 0, 00:11:43.646 "state": "online", 00:11:43.646 "raid_level": "raid1", 00:11:43.646 "superblock": false, 00:11:43.646 "num_base_bdevs": 2, 00:11:43.646 "num_base_bdevs_discovered": 2, 00:11:43.646 "num_base_bdevs_operational": 2, 00:11:43.646 "process": { 00:11:43.646 "type": "rebuild", 00:11:43.646 "target": "spare", 00:11:43.646 "progress": { 00:11:43.646 "blocks": 
10240, 00:11:43.646 "percent": 15 00:11:43.646 } 00:11:43.646 }, 00:11:43.646 "base_bdevs_list": [ 00:11:43.646 { 00:11:43.646 "name": "spare", 00:11:43.646 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:43.646 "is_configured": true, 00:11:43.646 "data_offset": 0, 00:11:43.646 "data_size": 65536 00:11:43.646 }, 00:11:43.646 { 00:11:43.646 "name": "BaseBdev2", 00:11:43.646 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:43.646 "is_configured": true, 00:11:43.646 "data_offset": 0, 00:11:43.646 "data_size": 65536 00:11:43.646 } 00:11:43.646 ] 00:11:43.646 }' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=318 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.646 "name": "raid_bdev1", 00:11:43.646 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:43.646 "strip_size_kb": 0, 00:11:43.646 "state": "online", 00:11:43.646 "raid_level": "raid1", 00:11:43.646 "superblock": false, 00:11:43.646 "num_base_bdevs": 2, 00:11:43.646 "num_base_bdevs_discovered": 2, 00:11:43.646 "num_base_bdevs_operational": 2, 00:11:43.646 "process": { 00:11:43.646 "type": "rebuild", 00:11:43.646 "target": "spare", 00:11:43.646 "progress": { 00:11:43.646 "blocks": 12288, 00:11:43.646 "percent": 18 00:11:43.646 } 00:11:43.646 }, 00:11:43.646 "base_bdevs_list": [ 00:11:43.646 { 00:11:43.646 "name": "spare", 00:11:43.646 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:43.646 "is_configured": true, 00:11:43.646 "data_offset": 0, 00:11:43.646 "data_size": 65536 00:11:43.646 }, 00:11:43.646 { 00:11:43.646 "name": "BaseBdev2", 00:11:43.646 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:43.646 "is_configured": true, 00:11:43.646 "data_offset": 0, 00:11:43.646 "data_size": 65536 00:11:43.646 } 00:11:43.646 ] 00:11:43.646 }' 00:11:43.646 23:07:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.646 23:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.906 23:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.906 23:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:43.906 [2024-11-18 23:07:03.148881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:44.165 148.25 IOPS, 444.75 MiB/s [2024-11-18T23:07:03.543Z] [2024-11-18 23:07:03.491641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:44.425 [2024-11-18 23:07:03.597664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:44.685 [2024-11-18 23:07:03.827760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:44.685 [2024-11-18 23:07:03.828197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.685 [2024-11-18 23:07:04.036214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.685 23:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.945 23:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.945 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.945 "name": "raid_bdev1", 00:11:44.945 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:44.945 "strip_size_kb": 0, 00:11:44.945 "state": "online", 00:11:44.945 "raid_level": "raid1", 00:11:44.945 "superblock": false, 00:11:44.945 "num_base_bdevs": 2, 00:11:44.945 "num_base_bdevs_discovered": 2, 00:11:44.945 "num_base_bdevs_operational": 2, 00:11:44.945 "process": { 00:11:44.945 "type": "rebuild", 00:11:44.945 "target": "spare", 00:11:44.945 "progress": { 00:11:44.945 "blocks": 28672, 00:11:44.945 "percent": 43 00:11:44.945 } 00:11:44.945 }, 00:11:44.945 "base_bdevs_list": [ 00:11:44.945 { 00:11:44.945 "name": "spare", 00:11:44.945 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:44.945 "is_configured": true, 00:11:44.945 "data_offset": 0, 00:11:44.945 "data_size": 65536 00:11:44.945 }, 00:11:44.945 { 00:11:44.945 "name": "BaseBdev2", 00:11:44.945 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:44.945 "is_configured": true, 00:11:44.945 "data_offset": 0, 00:11:44.945 "data_size": 65536 00:11:44.945 } 00:11:44.945 ] 00:11:44.945 }' 00:11:44.945 23:07:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.945 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.945 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.945 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.945 23:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:45.204 [2024-11-18 23:07:04.446463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:45.204 [2024-11-18 23:07:04.446701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:45.775 131.20 IOPS, 393.60 MiB/s [2024-11-18T23:07:05.153Z] [2024-11-18 23:07:04.881159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:45.775 [2024-11-18 23:07:04.881390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:45.775 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.775 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.775 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.775 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.775 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.775 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.035 "name": "raid_bdev1", 00:11:46.035 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:46.035 "strip_size_kb": 0, 00:11:46.035 "state": "online", 00:11:46.035 "raid_level": "raid1", 00:11:46.035 "superblock": false, 00:11:46.035 "num_base_bdevs": 2, 00:11:46.035 "num_base_bdevs_discovered": 2, 00:11:46.035 "num_base_bdevs_operational": 2, 00:11:46.035 "process": { 00:11:46.035 "type": "rebuild", 00:11:46.035 "target": "spare", 00:11:46.035 "progress": { 00:11:46.035 "blocks": 43008, 00:11:46.035 "percent": 65 00:11:46.035 } 00:11:46.035 }, 00:11:46.035 "base_bdevs_list": [ 00:11:46.035 { 00:11:46.035 "name": "spare", 00:11:46.035 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:46.035 "is_configured": true, 00:11:46.035 "data_offset": 0, 00:11:46.035 "data_size": 65536 00:11:46.035 }, 00:11:46.035 { 00:11:46.035 "name": "BaseBdev2", 00:11:46.035 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:46.035 "is_configured": true, 00:11:46.035 "data_offset": 0, 00:11:46.035 "data_size": 65536 00:11:46.035 } 00:11:46.035 ] 00:11:46.035 }' 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.035 [2024-11-18 23:07:05.203218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.035 23:07:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:46.294 115.00 IOPS, 345.00 MiB/s [2024-11-18T23:07:05.672Z] [2024-11-18 23:07:05.647993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:46.912 [2024-11-18 23:07:06.080368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.197 23:07:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.198 23:07:06 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.198 "name": "raid_bdev1", 00:11:47.198 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:47.198 "strip_size_kb": 0, 00:11:47.198 "state": "online", 00:11:47.198 "raid_level": "raid1", 00:11:47.198 "superblock": false, 00:11:47.198 "num_base_bdevs": 2, 00:11:47.198 "num_base_bdevs_discovered": 2, 00:11:47.198 "num_base_bdevs_operational": 2, 00:11:47.198 "process": { 00:11:47.198 "type": "rebuild", 00:11:47.198 "target": "spare", 00:11:47.198 "progress": { 00:11:47.198 "blocks": 61440, 00:11:47.198 "percent": 93 00:11:47.198 } 00:11:47.198 }, 00:11:47.198 "base_bdevs_list": [ 00:11:47.198 { 00:11:47.198 "name": "spare", 00:11:47.198 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:47.198 "is_configured": true, 00:11:47.198 "data_offset": 0, 00:11:47.198 "data_size": 65536 00:11:47.198 }, 00:11:47.198 { 00:11:47.198 "name": "BaseBdev2", 00:11:47.198 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:47.198 "is_configured": true, 00:11:47.198 "data_offset": 0, 00:11:47.198 "data_size": 65536 00:11:47.198 } 00:11:47.198 ] 00:11:47.198 }' 00:11:47.198 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.198 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.198 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.198 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.198 23:07:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.198 104.00 IOPS, 312.00 MiB/s [2024-11-18T23:07:06.576Z] [2024-11-18 23:07:06.515093] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:47.471 [2024-11-18 23:07:06.614890] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
00:11:47.471 [2024-11-18 23:07:06.616520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.414 95.25 IOPS, 285.75 MiB/s [2024-11-18T23:07:07.792Z] 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.414 "name": "raid_bdev1", 00:11:48.414 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:48.414 "strip_size_kb": 0, 00:11:48.414 "state": "online", 00:11:48.414 "raid_level": "raid1", 00:11:48.414 "superblock": false, 00:11:48.414 "num_base_bdevs": 2, 00:11:48.414 "num_base_bdevs_discovered": 2, 00:11:48.414 "num_base_bdevs_operational": 2, 00:11:48.414 "base_bdevs_list": [ 00:11:48.414 { 00:11:48.414 "name": "spare", 00:11:48.414 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:48.414 
"is_configured": true, 00:11:48.414 "data_offset": 0, 00:11:48.414 "data_size": 65536 00:11:48.414 }, 00:11:48.414 { 00:11:48.414 "name": "BaseBdev2", 00:11:48.414 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:48.414 "is_configured": true, 00:11:48.414 "data_offset": 0, 00:11:48.414 "data_size": 65536 00:11:48.414 } 00:11:48.414 ] 00:11:48.414 }' 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:48.414 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.414 "name": "raid_bdev1", 00:11:48.414 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:48.414 "strip_size_kb": 0, 00:11:48.414 "state": "online", 00:11:48.414 "raid_level": "raid1", 00:11:48.414 "superblock": false, 00:11:48.414 "num_base_bdevs": 2, 00:11:48.414 "num_base_bdevs_discovered": 2, 00:11:48.414 "num_base_bdevs_operational": 2, 00:11:48.414 "base_bdevs_list": [ 00:11:48.414 { 00:11:48.414 "name": "spare", 00:11:48.414 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:48.414 "is_configured": true, 00:11:48.414 "data_offset": 0, 00:11:48.414 "data_size": 65536 00:11:48.414 }, 00:11:48.414 { 00:11:48.414 "name": "BaseBdev2", 00:11:48.414 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:48.414 "is_configured": true, 00:11:48.414 "data_offset": 0, 00:11:48.414 "data_size": 65536 00:11:48.414 } 00:11:48.414 ] 00:11:48.414 }' 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.415 
23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.415 "name": "raid_bdev1", 00:11:48.415 "uuid": "0ab2fea9-6b87-44ce-a0d5-53305401a7a5", 00:11:48.415 "strip_size_kb": 0, 00:11:48.415 "state": "online", 00:11:48.415 "raid_level": "raid1", 00:11:48.415 "superblock": false, 00:11:48.415 "num_base_bdevs": 2, 00:11:48.415 "num_base_bdevs_discovered": 2, 00:11:48.415 "num_base_bdevs_operational": 2, 00:11:48.415 "base_bdevs_list": [ 00:11:48.415 { 00:11:48.415 "name": "spare", 00:11:48.415 "uuid": "41ffe7de-9583-56fb-8144-21ce15f5c49e", 00:11:48.415 "is_configured": true, 00:11:48.415 "data_offset": 0, 00:11:48.415 "data_size": 65536 00:11:48.415 }, 00:11:48.415 { 00:11:48.415 "name": "BaseBdev2", 00:11:48.415 "uuid": "b185a451-1d1e-5dd6-8eb5-89046501d38d", 00:11:48.415 "is_configured": true, 00:11:48.415 "data_offset": 0, 00:11:48.415 "data_size": 65536 
00:11:48.415 } 00:11:48.415 ] 00:11:48.415 }' 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.415 23:07:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.989 [2024-11-18 23:07:08.202222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.989 [2024-11-18 23:07:08.202255] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.989 00:11:48.989 Latency(us) 00:11:48.989 [2024-11-18T23:07:08.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.989 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:48.989 raid_bdev1 : 8.77 90.31 270.93 0.00 0.00 14865.64 271.87 113557.58 00:11:48.989 [2024-11-18T23:07:08.367Z] =================================================================================================================== 00:11:48.989 [2024-11-18T23:07:08.367Z] Total : 90.31 270.93 0.00 0.00 14865.64 271.87 113557.58 00:11:48.989 [2024-11-18 23:07:08.221589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.989 [2024-11-18 23:07:08.221627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.989 [2024-11-18 23:07:08.221731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.989 [2024-11-18 23:07:08.221744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:48.989 { 00:11:48.989 "results": [ 00:11:48.989 { 
00:11:48.989 "job": "raid_bdev1", 00:11:48.989 "core_mask": "0x1", 00:11:48.989 "workload": "randrw", 00:11:48.989 "percentage": 50, 00:11:48.989 "status": "finished", 00:11:48.989 "queue_depth": 2, 00:11:48.989 "io_size": 3145728, 00:11:48.989 "runtime": 8.769839, 00:11:48.989 "iops": 90.3095256366736, 00:11:48.989 "mibps": 270.9285769100208, 00:11:48.989 "io_failed": 0, 00:11:48.989 "io_timeout": 0, 00:11:48.989 "avg_latency_us": 14865.638039786512, 00:11:48.989 "min_latency_us": 271.87423580786026, 00:11:48.989 "max_latency_us": 113557.57554585153 00:11:48.989 } 00:11:48.989 ], 00:11:48.989 "core_count": 1 00:11:48.989 } 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:48.989 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:49.250 /dev/nbd0 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.250 1+0 records in 00:11:49.250 1+0 records out 
00:11:49.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387797 s, 10.6 MB/s 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.250 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:49.509 /dev/nbd1 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.509 1+0 records in 00:11:49.509 1+0 records out 00:11:49.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224072 s, 18.3 MB/s 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.509 23:07:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.770 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 87072 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87072 ']' 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87072 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87072 00:11:50.033 killing process with pid 87072 00:11:50.033 Received shutdown signal, test time was about 9.834325 seconds 00:11:50.033 00:11:50.033 Latency(us) 00:11:50.033 [2024-11-18T23:07:09.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.033 [2024-11-18T23:07:09.411Z] =================================================================================================================== 00:11:50.033 [2024-11-18T23:07:09.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87072' 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87072 00:11:50.033 [2024-11-18 23:07:09.280554] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.033 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87072 00:11:50.033 [2024-11-18 23:07:09.307445] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:50.292 00:11:50.292 real 0m11.676s 00:11:50.292 user 
0m14.755s 00:11:50.292 sys 0m1.413s 00:11:50.292 ************************************ 00:11:50.292 END TEST raid_rebuild_test_io 00:11:50.292 ************************************ 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.292 23:07:09 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:50.292 23:07:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:50.292 23:07:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.292 23:07:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.292 ************************************ 00:11:50.292 START TEST raid_rebuild_test_sb_io 00:11:50.292 ************************************ 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:50.292 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87459 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 87459 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87459 ']' 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.293 23:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:50.553 Zero copy mechanism will not be used. 00:11:50.553 [2024-11-18 23:07:09.713286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:50.553 [2024-11-18 23:07:09.713427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87459 ] 00:11:50.553 [2024-11-18 23:07:09.871844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.553 [2024-11-18 23:07:09.915969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.813 [2024-11-18 23:07:09.958115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.813 [2024-11-18 23:07:09.958158] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 BaseBdev1_malloc 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 [2024-11-18 23:07:10.551867] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:51.383 [2024-11-18 23:07:10.551945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.383 [2024-11-18 23:07:10.551968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.383 [2024-11-18 23:07:10.551982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.383 [2024-11-18 23:07:10.554161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.383 [2024-11-18 23:07:10.554202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.383 BaseBdev1 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 BaseBdev2_malloc 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 [2024-11-18 23:07:10.597005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.383 [2024-11-18 23:07:10.597201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:51.383 [2024-11-18 23:07:10.597309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.383 [2024-11-18 23:07:10.597386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.383 [2024-11-18 23:07:10.602162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.383 [2024-11-18 23:07:10.602331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.383 BaseBdev2 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 spare_malloc 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 spare_delay 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 
[2024-11-18 23:07:10.640006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:51.383 [2024-11-18 23:07:10.640112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.383 [2024-11-18 23:07:10.640153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:51.383 [2024-11-18 23:07:10.640181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.383 [2024-11-18 23:07:10.642231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.383 [2024-11-18 23:07:10.642328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:51.383 spare 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 [2024-11-18 23:07:10.652026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.383 [2024-11-18 23:07:10.653894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.383 [2024-11-18 23:07:10.654120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:51.383 [2024-11-18 23:07:10.654154] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.383 [2024-11-18 23:07:10.654466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:51.383 [2024-11-18 23:07:10.654637] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:51.383 [2024-11-18 
23:07:10.654682] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:51.383 [2024-11-18 23:07:10.654837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.383 "name": "raid_bdev1", 00:11:51.383 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:51.383 "strip_size_kb": 0, 00:11:51.383 "state": "online", 00:11:51.383 "raid_level": "raid1", 00:11:51.383 "superblock": true, 00:11:51.383 "num_base_bdevs": 2, 00:11:51.383 "num_base_bdevs_discovered": 2, 00:11:51.383 "num_base_bdevs_operational": 2, 00:11:51.383 "base_bdevs_list": [ 00:11:51.383 { 00:11:51.383 "name": "BaseBdev1", 00:11:51.383 "uuid": "163b602a-3fcf-592b-b545-1fa2fad71185", 00:11:51.383 "is_configured": true, 00:11:51.383 "data_offset": 2048, 00:11:51.383 "data_size": 63488 00:11:51.383 }, 00:11:51.383 { 00:11:51.383 "name": "BaseBdev2", 00:11:51.383 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:51.383 "is_configured": true, 00:11:51.383 "data_offset": 2048, 00:11:51.383 "data_size": 63488 00:11:51.383 } 00:11:51.383 ] 00:11:51.383 }' 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.383 23:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 [2024-11-18 23:07:11.079602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 [2024-11-18 23:07:11.179104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:51.954 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.955 "name": "raid_bdev1", 00:11:51.955 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:51.955 "strip_size_kb": 0, 00:11:51.955 "state": "online", 00:11:51.955 "raid_level": "raid1", 00:11:51.955 "superblock": true, 00:11:51.955 "num_base_bdevs": 2, 00:11:51.955 "num_base_bdevs_discovered": 1, 00:11:51.955 "num_base_bdevs_operational": 1, 00:11:51.955 "base_bdevs_list": [ 00:11:51.955 { 00:11:51.955 "name": null, 00:11:51.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.955 "is_configured": false, 00:11:51.955 "data_offset": 0, 00:11:51.955 "data_size": 63488 00:11:51.955 }, 00:11:51.955 { 00:11:51.955 "name": "BaseBdev2", 00:11:51.955 "uuid": 
"22830500-c6dc-591a-8337-92fd8ff13959", 00:11:51.955 "is_configured": true, 00:11:51.955 "data_offset": 2048, 00:11:51.955 "data_size": 63488 00:11:51.955 } 00:11:51.955 ] 00:11:51.955 }' 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.955 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.955 [2024-11-18 23:07:11.272909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:51.955 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.955 Zero copy mechanism will not be used. 00:11:51.955 Running I/O for 60 seconds... 00:11:52.526 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:52.526 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.526 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.526 [2024-11-18 23:07:11.649003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:52.526 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.526 23:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:52.526 [2024-11-18 23:07:11.690455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:52.526 [2024-11-18 23:07:11.692415] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:52.526 [2024-11-18 23:07:11.804794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:52.526 [2024-11-18 23:07:11.805289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:52.787 [2024-11-18 23:07:11.918946] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:52.787 [2024-11-18 23:07:11.919347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:53.046 [2024-11-18 23:07:12.268195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:53.306 211.00 IOPS, 633.00 MiB/s [2024-11-18T23:07:12.684Z] [2024-11-18 23:07:12.494914] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:53.306 [2024-11-18 23:07:12.495248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:53.306 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.306 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.306 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.306 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.306 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.566 23:07:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.566 "name": "raid_bdev1", 00:11:53.566 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:53.566 "strip_size_kb": 0, 00:11:53.566 "state": "online", 00:11:53.566 "raid_level": "raid1", 00:11:53.566 "superblock": true, 00:11:53.566 "num_base_bdevs": 2, 00:11:53.566 "num_base_bdevs_discovered": 2, 00:11:53.566 "num_base_bdevs_operational": 2, 00:11:53.566 "process": { 00:11:53.566 "type": "rebuild", 00:11:53.566 "target": "spare", 00:11:53.566 "progress": { 00:11:53.566 "blocks": 10240, 00:11:53.566 "percent": 16 00:11:53.566 } 00:11:53.566 }, 00:11:53.566 "base_bdevs_list": [ 00:11:53.566 { 00:11:53.566 "name": "spare", 00:11:53.566 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:53.566 "is_configured": true, 00:11:53.566 "data_offset": 2048, 00:11:53.566 "data_size": 63488 00:11:53.566 }, 00:11:53.566 { 00:11:53.566 "name": "BaseBdev2", 00:11:53.566 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:53.566 "is_configured": true, 00:11:53.566 "data_offset": 2048, 00:11:53.566 "data_size": 63488 00:11:53.566 } 00:11:53.566 ] 00:11:53.566 }' 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.566 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.566 [2024-11-18 
23:07:12.819086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.566 [2024-11-18 23:07:12.924038] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.566 [2024-11-18 23:07:12.931332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.566 [2024-11-18 23:07:12.931398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.566 [2024-11-18 23:07:12.931428] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.831 [2024-11-18 23:07:12.953388] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.831 
23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.831 23:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.831 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.831 "name": "raid_bdev1", 00:11:53.831 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:53.831 "strip_size_kb": 0, 00:11:53.831 "state": "online", 00:11:53.831 "raid_level": "raid1", 00:11:53.831 "superblock": true, 00:11:53.831 "num_base_bdevs": 2, 00:11:53.831 "num_base_bdevs_discovered": 1, 00:11:53.831 "num_base_bdevs_operational": 1, 00:11:53.831 "base_bdevs_list": [ 00:11:53.831 { 00:11:53.831 "name": null, 00:11:53.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.831 "is_configured": false, 00:11:53.831 "data_offset": 0, 00:11:53.831 "data_size": 63488 00:11:53.831 }, 00:11:53.831 { 00:11:53.831 "name": "BaseBdev2", 00:11:53.831 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:53.831 "is_configured": true, 00:11:53.831 "data_offset": 2048, 00:11:53.831 "data_size": 63488 00:11:53.831 } 00:11:53.831 ] 00:11:53.831 }' 00:11:53.831 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.831 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.089 190.50 IOPS, 571.50 MiB/s [2024-11-18T23:07:13.467Z] 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.089 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.349 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.349 "name": "raid_bdev1", 00:11:54.349 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:54.349 "strip_size_kb": 0, 00:11:54.349 "state": "online", 00:11:54.349 "raid_level": "raid1", 00:11:54.349 "superblock": true, 00:11:54.349 "num_base_bdevs": 2, 00:11:54.350 "num_base_bdevs_discovered": 1, 00:11:54.350 "num_base_bdevs_operational": 1, 00:11:54.350 "base_bdevs_list": [ 00:11:54.350 { 00:11:54.350 "name": null, 00:11:54.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.350 "is_configured": false, 00:11:54.350 "data_offset": 0, 00:11:54.350 "data_size": 63488 00:11:54.350 }, 00:11:54.350 { 00:11:54.350 "name": "BaseBdev2", 00:11:54.350 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:54.350 "is_configured": true, 00:11:54.350 "data_offset": 2048, 00:11:54.350 "data_size": 63488 00:11:54.350 } 00:11:54.350 ] 00:11:54.350 }' 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.350 [2024-11-18 23:07:13.540260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.350 23:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:54.350 [2024-11-18 23:07:13.571262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:54.350 [2024-11-18 23:07:13.573133] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:54.350 [2024-11-18 23:07:13.686292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:54.350 [2024-11-18 23:07:13.686795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:54.619 [2024-11-18 23:07:13.889441] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:54.619 [2024-11-18 23:07:13.889710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:54.881 [2024-11-18 23:07:14.237244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:54.881 [2024-11-18 23:07:14.237686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:55.141 189.33 IOPS, 568.00 MiB/s [2024-11-18T23:07:14.519Z] [2024-11-18 23:07:14.451626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:55.141 [2024-11-18 23:07:14.451844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.402 "name": "raid_bdev1", 00:11:55.402 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:55.402 "strip_size_kb": 0, 00:11:55.402 
"state": "online", 00:11:55.402 "raid_level": "raid1", 00:11:55.402 "superblock": true, 00:11:55.402 "num_base_bdevs": 2, 00:11:55.402 "num_base_bdevs_discovered": 2, 00:11:55.402 "num_base_bdevs_operational": 2, 00:11:55.402 "process": { 00:11:55.402 "type": "rebuild", 00:11:55.402 "target": "spare", 00:11:55.402 "progress": { 00:11:55.402 "blocks": 10240, 00:11:55.402 "percent": 16 00:11:55.402 } 00:11:55.402 }, 00:11:55.402 "base_bdevs_list": [ 00:11:55.402 { 00:11:55.402 "name": "spare", 00:11:55.402 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:55.402 "is_configured": true, 00:11:55.402 "data_offset": 2048, 00:11:55.402 "data_size": 63488 00:11:55.402 }, 00:11:55.402 { 00:11:55.402 "name": "BaseBdev2", 00:11:55.402 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:55.402 "is_configured": true, 00:11:55.402 "data_offset": 2048, 00:11:55.402 "data_size": 63488 00:11:55.402 } 00:11:55.402 ] 00:11:55.402 }' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:55.402 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:55.402 23:07:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=330 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.402 "name": "raid_bdev1", 00:11:55.402 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:55.402 "strip_size_kb": 0, 00:11:55.402 "state": "online", 00:11:55.402 "raid_level": "raid1", 00:11:55.402 "superblock": true, 00:11:55.402 "num_base_bdevs": 2, 00:11:55.402 "num_base_bdevs_discovered": 2, 00:11:55.402 "num_base_bdevs_operational": 2, 00:11:55.402 "process": { 00:11:55.402 "type": "rebuild", 00:11:55.402 "target": "spare", 00:11:55.402 
"progress": { 00:11:55.402 "blocks": 12288, 00:11:55.402 "percent": 19 00:11:55.402 } 00:11:55.402 }, 00:11:55.402 "base_bdevs_list": [ 00:11:55.402 { 00:11:55.402 "name": "spare", 00:11:55.402 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:55.402 "is_configured": true, 00:11:55.402 "data_offset": 2048, 00:11:55.402 "data_size": 63488 00:11:55.402 }, 00:11:55.402 { 00:11:55.402 "name": "BaseBdev2", 00:11:55.402 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:55.402 "is_configured": true, 00:11:55.402 "data_offset": 2048, 00:11:55.402 "data_size": 63488 00:11:55.402 } 00:11:55.402 ] 00:11:55.402 }' 00:11:55.402 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.662 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.662 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.662 [2024-11-18 23:07:14.792899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:55.662 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.662 23:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:55.662 [2024-11-18 23:07:15.000084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:55.662 [2024-11-18 23:07:15.000378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:56.231 155.25 IOPS, 465.75 MiB/s [2024-11-18T23:07:15.610Z] [2024-11-18 23:07:15.445102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:56.492 [2024-11-18 23:07:15.654348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:56.492 [2024-11-18 23:07:15.654793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:56.492 [2024-11-18 23:07:15.769284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.492 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.752 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.752 "name": "raid_bdev1", 00:11:56.752 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:56.752 "strip_size_kb": 0, 00:11:56.752 "state": "online", 00:11:56.752 "raid_level": "raid1", 00:11:56.752 "superblock": true, 00:11:56.752 
"num_base_bdevs": 2, 00:11:56.752 "num_base_bdevs_discovered": 2, 00:11:56.752 "num_base_bdevs_operational": 2, 00:11:56.752 "process": { 00:11:56.752 "type": "rebuild", 00:11:56.752 "target": "spare", 00:11:56.752 "progress": { 00:11:56.752 "blocks": 28672, 00:11:56.752 "percent": 45 00:11:56.752 } 00:11:56.752 }, 00:11:56.752 "base_bdevs_list": [ 00:11:56.752 { 00:11:56.752 "name": "spare", 00:11:56.752 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:56.752 "is_configured": true, 00:11:56.752 "data_offset": 2048, 00:11:56.752 "data_size": 63488 00:11:56.752 }, 00:11:56.752 { 00:11:56.752 "name": "BaseBdev2", 00:11:56.752 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:56.752 "is_configured": true, 00:11:56.752 "data_offset": 2048, 00:11:56.752 "data_size": 63488 00:11:56.752 } 00:11:56.752 ] 00:11:56.752 }' 00:11:56.752 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.752 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.752 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.752 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.752 23:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.752 [2024-11-18 23:07:15.990414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:56.752 [2024-11-18 23:07:16.097349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:57.582 139.00 IOPS, 417.00 MiB/s [2024-11-18T23:07:16.960Z] [2024-11-18 23:07:16.765414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:57.582 [2024-11-18 23:07:16.765755] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.842 23:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.842 23:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.842 "name": "raid_bdev1", 00:11:57.842 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:57.842 "strip_size_kb": 0, 00:11:57.842 "state": "online", 00:11:57.842 "raid_level": "raid1", 00:11:57.842 "superblock": true, 00:11:57.842 "num_base_bdevs": 2, 00:11:57.842 "num_base_bdevs_discovered": 2, 00:11:57.842 "num_base_bdevs_operational": 2, 00:11:57.842 "process": { 00:11:57.842 "type": "rebuild", 00:11:57.842 "target": "spare", 00:11:57.842 "progress": { 00:11:57.842 "blocks": 47104, 00:11:57.842 "percent": 74 
00:11:57.842 } 00:11:57.842 }, 00:11:57.842 "base_bdevs_list": [ 00:11:57.842 { 00:11:57.842 "name": "spare", 00:11:57.842 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:57.842 "is_configured": true, 00:11:57.842 "data_offset": 2048, 00:11:57.842 "data_size": 63488 00:11:57.842 }, 00:11:57.842 { 00:11:57.842 "name": "BaseBdev2", 00:11:57.842 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:57.842 "is_configured": true, 00:11:57.842 "data_offset": 2048, 00:11:57.842 "data_size": 63488 00:11:57.842 } 00:11:57.842 ] 00:11:57.842 }' 00:11:57.842 23:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.842 23:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.842 23:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.842 23:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.842 23:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:57.842 [2024-11-18 23:07:17.200793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:57.842 [2024-11-18 23:07:17.201097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:58.361 122.33 IOPS, 367.00 MiB/s [2024-11-18T23:07:17.739Z] [2024-11-18 23:07:17.518104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:58.361 [2024-11-18 23:07:17.620770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:58.621 [2024-11-18 23:07:17.841149] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:58.621 [2024-11-18 23:07:17.940936] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:58.621 [2024-11-18 23:07:17.942489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.881 "name": "raid_bdev1", 00:11:58.881 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:58.881 "strip_size_kb": 0, 00:11:58.881 "state": "online", 00:11:58.881 "raid_level": "raid1", 00:11:58.881 "superblock": true, 00:11:58.881 "num_base_bdevs": 2, 00:11:58.881 "num_base_bdevs_discovered": 2, 00:11:58.881 "num_base_bdevs_operational": 2, 00:11:58.881 "base_bdevs_list": [ 00:11:58.881 { 00:11:58.881 "name": 
"spare", 00:11:58.881 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:58.881 "is_configured": true, 00:11:58.881 "data_offset": 2048, 00:11:58.881 "data_size": 63488 00:11:58.881 }, 00:11:58.881 { 00:11:58.881 "name": "BaseBdev2", 00:11:58.881 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:58.881 "is_configured": true, 00:11:58.881 "data_offset": 2048, 00:11:58.881 "data_size": 63488 00:11:58.881 } 00:11:58.881 ] 00:11:58.881 }' 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.881 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.141 110.14 IOPS, 330.43 MiB/s [2024-11-18T23:07:18.519Z] 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.141 "name": "raid_bdev1", 00:11:59.141 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:59.141 "strip_size_kb": 0, 00:11:59.141 "state": "online", 00:11:59.141 "raid_level": "raid1", 00:11:59.141 "superblock": true, 00:11:59.141 "num_base_bdevs": 2, 00:11:59.141 "num_base_bdevs_discovered": 2, 00:11:59.141 "num_base_bdevs_operational": 2, 00:11:59.141 "base_bdevs_list": [ 00:11:59.141 { 00:11:59.141 "name": "spare", 00:11:59.141 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:59.141 "is_configured": true, 00:11:59.141 "data_offset": 2048, 00:11:59.141 "data_size": 63488 00:11:59.141 }, 00:11:59.141 { 00:11:59.141 "name": "BaseBdev2", 00:11:59.141 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:59.141 "is_configured": true, 00:11:59.141 "data_offset": 2048, 00:11:59.141 "data_size": 63488 00:11:59.141 } 00:11:59.141 ] 00:11:59.141 }' 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.141 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.142 "name": "raid_bdev1", 00:11:59.142 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:11:59.142 "strip_size_kb": 0, 00:11:59.142 "state": "online", 00:11:59.142 "raid_level": "raid1", 00:11:59.142 "superblock": true, 00:11:59.142 "num_base_bdevs": 2, 00:11:59.142 "num_base_bdevs_discovered": 2, 00:11:59.142 "num_base_bdevs_operational": 2, 00:11:59.142 "base_bdevs_list": [ 00:11:59.142 { 00:11:59.142 "name": "spare", 00:11:59.142 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:11:59.142 "is_configured": true, 00:11:59.142 
"data_offset": 2048, 00:11:59.142 "data_size": 63488 00:11:59.142 }, 00:11:59.142 { 00:11:59.142 "name": "BaseBdev2", 00:11:59.142 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:11:59.142 "is_configured": true, 00:11:59.142 "data_offset": 2048, 00:11:59.142 "data_size": 63488 00:11:59.142 } 00:11:59.142 ] 00:11:59.142 }' 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.142 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.712 [2024-11-18 23:07:18.821987] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.712 [2024-11-18 23:07:18.822066] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.712 00:11:59.712 Latency(us) 00:11:59.712 [2024-11-18T23:07:19.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.712 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:59.712 raid_bdev1 : 7.66 103.82 311.46 0.00 0.00 12354.99 277.24 108978.64 00:11:59.712 [2024-11-18T23:07:19.090Z] =================================================================================================================== 00:11:59.712 [2024-11-18T23:07:19.090Z] Total : 103.82 311.46 0.00 0.00 12354.99 277.24 108978.64 00:11:59.712 { 00:11:59.712 "results": [ 00:11:59.712 { 00:11:59.712 "job": "raid_bdev1", 00:11:59.712 "core_mask": "0x1", 00:11:59.712 "workload": "randrw", 00:11:59.712 "percentage": 50, 00:11:59.712 "status": "finished", 00:11:59.712 "queue_depth": 2, 00:11:59.712 "io_size": 3145728, 
00:11:59.712 "runtime": 7.657546, 00:11:59.712 "iops": 103.81916086432912, 00:11:59.712 "mibps": 311.4574825929874, 00:11:59.712 "io_failed": 0, 00:11:59.712 "io_timeout": 0, 00:11:59.712 "avg_latency_us": 12354.990990634698, 00:11:59.712 "min_latency_us": 277.2401746724891, 00:11:59.712 "max_latency_us": 108978.64104803493 00:11:59.712 } 00:11:59.712 ], 00:11:59.712 "core_count": 1 00:11:59.712 } 00:11:59.712 [2024-11-18 23:07:18.920858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.712 [2024-11-18 23:07:18.920899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.712 [2024-11-18 23:07:18.920977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.712 [2024-11-18 23:07:18.920990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:59.712 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.713 23:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:59.977 /dev/nbd0 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.977 1+0 records in 00:11:59.977 1+0 records out 00:11:59.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035066 s, 11.7 MB/s 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.977 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.978 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:00.236 /dev/nbd1 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.236 1+0 
records in 00:12:00.236 1+0 records out 00:12:00.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400211 s, 10.2 MB/s 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.236 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.496 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:00.756 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:00.756 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:00.756 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:00.756 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.756 23:07:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.757 [2024-11-18 23:07:19.958435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:00.757 [2024-11-18 23:07:19.958529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.757 [2024-11-18 23:07:19.958582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:00.757 [2024-11-18 23:07:19.958614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.757 [2024-11-18 23:07:19.960801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.757 [2024-11-18 23:07:19.960874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:12:00.757 [2024-11-18 23:07:19.960976] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:00.757 [2024-11-18 23:07:19.961069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.757 [2024-11-18 23:07:19.961218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.757 spare 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.757 23:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.757 [2024-11-18 23:07:20.061160] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:00.757 [2024-11-18 23:07:20.061218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.757 [2024-11-18 23:07:20.061524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:12:00.757 [2024-11-18 23:07:20.061689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:00.757 [2024-11-18 23:07:20.061741] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:00.757 [2024-11-18 23:07:20.061902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.757 23:07:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.757 "name": "raid_bdev1", 00:12:00.757 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:00.757 "strip_size_kb": 0, 00:12:00.757 "state": "online", 00:12:00.757 "raid_level": "raid1", 00:12:00.757 "superblock": true, 00:12:00.757 "num_base_bdevs": 2, 00:12:00.757 "num_base_bdevs_discovered": 2, 00:12:00.757 "num_base_bdevs_operational": 2, 00:12:00.757 "base_bdevs_list": [ 00:12:00.757 { 00:12:00.757 "name": "spare", 00:12:00.757 "uuid": 
"c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:12:00.757 "is_configured": true, 00:12:00.757 "data_offset": 2048, 00:12:00.757 "data_size": 63488 00:12:00.757 }, 00:12:00.757 { 00:12:00.757 "name": "BaseBdev2", 00:12:00.757 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:00.757 "is_configured": true, 00:12:00.757 "data_offset": 2048, 00:12:00.757 "data_size": 63488 00:12:00.757 } 00:12:00.757 ] 00:12:00.757 }' 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.757 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.325 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.325 "name": "raid_bdev1", 00:12:01.325 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:01.325 "strip_size_kb": 0, 00:12:01.325 
"state": "online", 00:12:01.325 "raid_level": "raid1", 00:12:01.325 "superblock": true, 00:12:01.325 "num_base_bdevs": 2, 00:12:01.325 "num_base_bdevs_discovered": 2, 00:12:01.325 "num_base_bdevs_operational": 2, 00:12:01.325 "base_bdevs_list": [ 00:12:01.325 { 00:12:01.325 "name": "spare", 00:12:01.325 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:12:01.325 "is_configured": true, 00:12:01.325 "data_offset": 2048, 00:12:01.326 "data_size": 63488 00:12:01.326 }, 00:12:01.326 { 00:12:01.326 "name": "BaseBdev2", 00:12:01.326 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:01.326 "is_configured": true, 00:12:01.326 "data_offset": 2048, 00:12:01.326 "data_size": 63488 00:12:01.326 } 00:12:01.326 ] 00:12:01.326 }' 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.326 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:01.586 
23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.586 [2024-11-18 23:07:20.725224] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.586 "name": "raid_bdev1", 00:12:01.586 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:01.586 "strip_size_kb": 0, 00:12:01.586 "state": "online", 00:12:01.586 "raid_level": "raid1", 00:12:01.586 "superblock": true, 00:12:01.586 "num_base_bdevs": 2, 00:12:01.586 "num_base_bdevs_discovered": 1, 00:12:01.586 "num_base_bdevs_operational": 1, 00:12:01.586 "base_bdevs_list": [ 00:12:01.586 { 00:12:01.586 "name": null, 00:12:01.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.586 "is_configured": false, 00:12:01.586 "data_offset": 0, 00:12:01.586 "data_size": 63488 00:12:01.586 }, 00:12:01.586 { 00:12:01.586 "name": "BaseBdev2", 00:12:01.586 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:01.586 "is_configured": true, 00:12:01.586 "data_offset": 2048, 00:12:01.586 "data_size": 63488 00:12:01.586 } 00:12:01.586 ] 00:12:01.586 }' 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.586 23:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.846 23:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:01.846 23:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.846 23:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.846 [2024-11-18 23:07:21.192489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.846 [2024-11-18 23:07:21.192702] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:01.846 [2024-11-18 23:07:21.192775] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:01.846 [2024-11-18 23:07:21.192832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.846 [2024-11-18 23:07:21.197329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:12:01.846 23:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.846 23:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:01.846 [2024-11-18 23:07:21.199164] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.227 "name": "raid_bdev1", 00:12:03.227 "uuid": 
"8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:03.227 "strip_size_kb": 0, 00:12:03.227 "state": "online", 00:12:03.227 "raid_level": "raid1", 00:12:03.227 "superblock": true, 00:12:03.227 "num_base_bdevs": 2, 00:12:03.227 "num_base_bdevs_discovered": 2, 00:12:03.227 "num_base_bdevs_operational": 2, 00:12:03.227 "process": { 00:12:03.227 "type": "rebuild", 00:12:03.227 "target": "spare", 00:12:03.227 "progress": { 00:12:03.227 "blocks": 20480, 00:12:03.227 "percent": 32 00:12:03.227 } 00:12:03.227 }, 00:12:03.227 "base_bdevs_list": [ 00:12:03.227 { 00:12:03.227 "name": "spare", 00:12:03.227 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:12:03.227 "is_configured": true, 00:12:03.227 "data_offset": 2048, 00:12:03.227 "data_size": 63488 00:12:03.227 }, 00:12:03.227 { 00:12:03.227 "name": "BaseBdev2", 00:12:03.227 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:03.227 "is_configured": true, 00:12:03.227 "data_offset": 2048, 00:12:03.227 "data_size": 63488 00:12:03.227 } 00:12:03.227 ] 00:12:03.227 }' 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.227 [2024-11-18 23:07:22.363467] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.227 [2024-11-18 23:07:22.403336] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:03.227 [2024-11-18 23:07:22.403397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.227 [2024-11-18 23:07:22.403415] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.227 [2024-11-18 23:07:22.403422] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.227 23:07:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.227 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.227 "name": "raid_bdev1", 00:12:03.227 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:03.227 "strip_size_kb": 0, 00:12:03.227 "state": "online", 00:12:03.227 "raid_level": "raid1", 00:12:03.228 "superblock": true, 00:12:03.228 "num_base_bdevs": 2, 00:12:03.228 "num_base_bdevs_discovered": 1, 00:12:03.228 "num_base_bdevs_operational": 1, 00:12:03.228 "base_bdevs_list": [ 00:12:03.228 { 00:12:03.228 "name": null, 00:12:03.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.228 "is_configured": false, 00:12:03.228 "data_offset": 0, 00:12:03.228 "data_size": 63488 00:12:03.228 }, 00:12:03.228 { 00:12:03.228 "name": "BaseBdev2", 00:12:03.228 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:03.228 "is_configured": true, 00:12:03.228 "data_offset": 2048, 00:12:03.228 "data_size": 63488 00:12:03.228 } 00:12:03.228 ] 00:12:03.228 }' 00:12:03.228 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.228 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.799 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.799 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.799 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.799 [2024-11-18 23:07:22.891115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.799 [2024-11-18 23:07:22.891246] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.799 [2024-11-18 23:07:22.891298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:03.799 [2024-11-18 23:07:22.891327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.799 [2024-11-18 23:07:22.891758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.799 [2024-11-18 23:07:22.891815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.799 [2024-11-18 23:07:22.891925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:03.799 [2024-11-18 23:07:22.891963] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:03.799 [2024-11-18 23:07:22.892007] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:03.799 [2024-11-18 23:07:22.892050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.799 [2024-11-18 23:07:22.896529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:03.799 spare 00:12:03.799 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.799 23:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:03.799 [2024-11-18 23:07:22.898388] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.745 23:07:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.745 "name": "raid_bdev1", 00:12:04.745 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:04.745 "strip_size_kb": 0, 00:12:04.745 "state": "online", 00:12:04.745 "raid_level": "raid1", 00:12:04.745 "superblock": true, 00:12:04.745 "num_base_bdevs": 2, 00:12:04.745 "num_base_bdevs_discovered": 2, 00:12:04.745 "num_base_bdevs_operational": 2, 00:12:04.745 "process": { 00:12:04.745 "type": "rebuild", 00:12:04.745 "target": "spare", 00:12:04.745 "progress": { 00:12:04.745 "blocks": 20480, 00:12:04.745 "percent": 32 00:12:04.745 } 00:12:04.745 }, 00:12:04.745 "base_bdevs_list": [ 00:12:04.745 { 00:12:04.745 "name": "spare", 00:12:04.745 "uuid": "c3784d54-5758-5a2b-a6d7-61afa65a3a50", 00:12:04.745 "is_configured": true, 00:12:04.745 "data_offset": 2048, 00:12:04.745 "data_size": 63488 00:12:04.745 }, 00:12:04.745 { 00:12:04.745 "name": "BaseBdev2", 00:12:04.745 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:04.745 "is_configured": true, 00:12:04.745 "data_offset": 2048, 00:12:04.745 "data_size": 63488 00:12:04.745 } 00:12:04.745 ] 00:12:04.745 }' 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.745 23:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.745 [2024-11-18 23:07:24.043312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.745 [2024-11-18 23:07:24.102567] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:04.745 [2024-11-18 23:07:24.102682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.745 [2024-11-18 23:07:24.102699] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.745 [2024-11-18 23:07:24.102708] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.745 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.745 23:07:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.746 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.006 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.006 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.006 "name": "raid_bdev1", 00:12:05.006 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:05.006 "strip_size_kb": 0, 00:12:05.006 "state": "online", 00:12:05.006 "raid_level": "raid1", 00:12:05.006 "superblock": true, 00:12:05.006 "num_base_bdevs": 2, 00:12:05.006 "num_base_bdevs_discovered": 1, 00:12:05.006 "num_base_bdevs_operational": 1, 00:12:05.006 "base_bdevs_list": [ 00:12:05.006 { 00:12:05.006 "name": null, 00:12:05.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.006 "is_configured": false, 00:12:05.006 "data_offset": 0, 00:12:05.006 "data_size": 63488 00:12:05.006 }, 00:12:05.006 { 00:12:05.006 "name": "BaseBdev2", 00:12:05.006 "uuid": 
"22830500-c6dc-591a-8337-92fd8ff13959", 00:12:05.006 "is_configured": true, 00:12:05.006 "data_offset": 2048, 00:12:05.006 "data_size": 63488 00:12:05.006 } 00:12:05.006 ] 00:12:05.006 }' 00:12:05.006 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.006 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.266 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.267 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.267 "name": "raid_bdev1", 00:12:05.267 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:05.267 "strip_size_kb": 0, 00:12:05.267 "state": "online", 00:12:05.267 "raid_level": "raid1", 00:12:05.267 "superblock": true, 00:12:05.267 "num_base_bdevs": 2, 00:12:05.267 "num_base_bdevs_discovered": 1, 00:12:05.267 "num_base_bdevs_operational": 1, 00:12:05.267 
"base_bdevs_list": [ 00:12:05.267 { 00:12:05.267 "name": null, 00:12:05.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.267 "is_configured": false, 00:12:05.267 "data_offset": 0, 00:12:05.267 "data_size": 63488 00:12:05.267 }, 00:12:05.267 { 00:12:05.267 "name": "BaseBdev2", 00:12:05.267 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:05.267 "is_configured": true, 00:12:05.267 "data_offset": 2048, 00:12:05.267 "data_size": 63488 00:12:05.267 } 00:12:05.267 ] 00:12:05.267 }' 00:12:05.267 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.267 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.527 [2024-11-18 23:07:24.678116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:05.527 [2024-11-18 23:07:24.678172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:05.527 [2024-11-18 23:07:24.678190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:05.527 [2024-11-18 23:07:24.678200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.527 [2024-11-18 23:07:24.678594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.527 [2024-11-18 23:07:24.678615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.527 [2024-11-18 23:07:24.678681] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:05.527 [2024-11-18 23:07:24.678699] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:05.527 [2024-11-18 23:07:24.678707] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:05.527 [2024-11-18 23:07:24.678724] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:05.527 BaseBdev1 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.527 23:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.520 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.520 "name": "raid_bdev1", 00:12:06.520 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:06.520 "strip_size_kb": 0, 00:12:06.520 "state": "online", 00:12:06.520 "raid_level": "raid1", 00:12:06.520 "superblock": true, 00:12:06.521 "num_base_bdevs": 2, 00:12:06.521 "num_base_bdevs_discovered": 1, 00:12:06.521 "num_base_bdevs_operational": 1, 00:12:06.521 "base_bdevs_list": [ 00:12:06.521 { 00:12:06.521 "name": null, 00:12:06.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.521 "is_configured": false, 00:12:06.521 "data_offset": 0, 00:12:06.521 "data_size": 63488 00:12:06.521 }, 00:12:06.521 { 00:12:06.521 "name": "BaseBdev2", 00:12:06.521 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:06.521 "is_configured": true, 00:12:06.521 "data_offset": 2048, 00:12:06.521 "data_size": 63488 00:12:06.521 } 00:12:06.521 ] 00:12:06.521 }' 
00:12:06.521 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.521 23:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.786 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.046 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.046 "name": "raid_bdev1", 00:12:07.046 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:07.046 "strip_size_kb": 0, 00:12:07.046 "state": "online", 00:12:07.046 "raid_level": "raid1", 00:12:07.046 "superblock": true, 00:12:07.046 "num_base_bdevs": 2, 00:12:07.046 "num_base_bdevs_discovered": 1, 00:12:07.046 "num_base_bdevs_operational": 1, 00:12:07.046 "base_bdevs_list": [ 00:12:07.046 { 00:12:07.046 "name": null, 00:12:07.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.046 "is_configured": false, 00:12:07.046 "data_offset": 0, 
00:12:07.046 "data_size": 63488 00:12:07.046 }, 00:12:07.046 { 00:12:07.046 "name": "BaseBdev2", 00:12:07.046 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:07.046 "is_configured": true, 00:12:07.046 "data_offset": 2048, 00:12:07.046 "data_size": 63488 00:12:07.046 } 00:12:07.046 ] 00:12:07.046 }' 00:12:07.046 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.046 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.046 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:07.047 [2024-11-18 23:07:26.287553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.047 [2024-11-18 23:07:26.287757] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:07.047 [2024-11-18 23:07:26.287827] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:07.047 request: 00:12:07.047 { 00:12:07.047 "base_bdev": "BaseBdev1", 00:12:07.047 "raid_bdev": "raid_bdev1", 00:12:07.047 "method": "bdev_raid_add_base_bdev", 00:12:07.047 "req_id": 1 00:12:07.047 } 00:12:07.047 Got JSON-RPC error response 00:12:07.047 response: 00:12:07.047 { 00:12:07.047 "code": -22, 00:12:07.047 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:07.047 } 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:07.047 23:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.987 23:07:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.987 "name": "raid_bdev1", 00:12:07.987 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:07.987 "strip_size_kb": 0, 00:12:07.987 "state": "online", 00:12:07.987 "raid_level": "raid1", 00:12:07.987 "superblock": true, 00:12:07.987 "num_base_bdevs": 2, 00:12:07.987 "num_base_bdevs_discovered": 1, 00:12:07.987 "num_base_bdevs_operational": 1, 00:12:07.987 "base_bdevs_list": [ 00:12:07.987 { 00:12:07.987 "name": null, 00:12:07.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.987 "is_configured": false, 00:12:07.987 "data_offset": 0, 00:12:07.987 "data_size": 63488 00:12:07.987 }, 00:12:07.987 { 00:12:07.987 "name": "BaseBdev2", 00:12:07.987 "uuid": 
"22830500-c6dc-591a-8337-92fd8ff13959", 00:12:07.987 "is_configured": true, 00:12:07.987 "data_offset": 2048, 00:12:07.987 "data_size": 63488 00:12:07.987 } 00:12:07.987 ] 00:12:07.987 }' 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.987 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.557 "name": "raid_bdev1", 00:12:08.557 "uuid": "8af53e86-f951-440b-bd1f-dc4bee9f9725", 00:12:08.557 "strip_size_kb": 0, 00:12:08.557 "state": "online", 00:12:08.557 "raid_level": "raid1", 00:12:08.557 "superblock": true, 00:12:08.557 "num_base_bdevs": 2, 00:12:08.557 "num_base_bdevs_discovered": 1, 00:12:08.557 "num_base_bdevs_operational": 1, 00:12:08.557 
"base_bdevs_list": [ 00:12:08.557 { 00:12:08.557 "name": null, 00:12:08.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.557 "is_configured": false, 00:12:08.557 "data_offset": 0, 00:12:08.557 "data_size": 63488 00:12:08.557 }, 00:12:08.557 { 00:12:08.557 "name": "BaseBdev2", 00:12:08.557 "uuid": "22830500-c6dc-591a-8337-92fd8ff13959", 00:12:08.557 "is_configured": true, 00:12:08.557 "data_offset": 2048, 00:12:08.557 "data_size": 63488 00:12:08.557 } 00:12:08.557 ] 00:12:08.557 }' 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.557 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87459 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87459 ']' 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87459 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87459 00:12:08.817 killing process with pid 87459 00:12:08.817 Received shutdown signal, test time was about 16.738020 seconds 00:12:08.817 00:12:08.817 Latency(us) 00:12:08.817 [2024-11-18T23:07:28.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.817 [2024-11-18T23:07:28.195Z] 
=================================================================================================================== 00:12:08.817 [2024-11-18T23:07:28.195Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87459' 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87459 00:12:08.817 23:07:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87459 00:12:08.817 [2024-11-18 23:07:27.980754] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.818 [2024-11-18 23:07:27.980897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.818 [2024-11-18 23:07:27.980968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.818 [2024-11-18 23:07:27.980980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:08.818 [2024-11-18 23:07:28.006571] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:09.078 ************************************ 00:12:09.078 END TEST raid_rebuild_test_sb_io 00:12:09.078 ************************************ 00:12:09.078 00:12:09.078 real 0m18.625s 00:12:09.078 user 0m24.835s 00:12:09.078 sys 0m2.113s 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.078 23:07:28 bdev_raid -- 
bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:09.078 23:07:28 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:09.078 23:07:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:09.078 23:07:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.078 23:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.078 ************************************ 00:12:09.078 START TEST raid_rebuild_test 00:12:09.078 ************************************ 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88135 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 88135 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88135 ']' 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.078 23:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.078 [2024-11-18 23:07:28.430883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:09.078 [2024-11-18 23:07:28.431115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88135 ] 00:12:09.078 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.078 Zero copy mechanism will not be used. 
00:12:09.339 [2024-11-18 23:07:28.592496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.339 [2024-11-18 23:07:28.639493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.339 [2024-11-18 23:07:28.682990] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.339 [2024-11-18 23:07:28.683103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.932 BaseBdev1_malloc 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.932 [2024-11-18 23:07:29.254038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:09.932 [2024-11-18 23:07:29.254190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.932 [2024-11-18 23:07:29.254251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:09.932 [2024-11-18 23:07:29.254301] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.932 [2024-11-18 23:07:29.256425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.932 [2024-11-18 23:07:29.256495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:09.932 BaseBdev1 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.932 BaseBdev2_malloc 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.932 [2024-11-18 23:07:29.291275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:09.932 [2024-11-18 23:07:29.291430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.932 [2024-11-18 23:07:29.291468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:09.932 [2024-11-18 23:07:29.291500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.932 [2024-11-18 23:07:29.293663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.932 [2024-11-18 23:07:29.293736] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:09.932 BaseBdev2 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.932 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 BaseBdev3_malloc 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 [2024-11-18 23:07:29.320171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:10.193 [2024-11-18 23:07:29.320301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.193 [2024-11-18 23:07:29.320362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:10.193 [2024-11-18 23:07:29.320394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.193 [2024-11-18 23:07:29.322433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.193 [2024-11-18 23:07:29.322499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:10.193 BaseBdev3 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 
23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 BaseBdev4_malloc 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 [2024-11-18 23:07:29.348932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:10.193 [2024-11-18 23:07:29.349069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.193 [2024-11-18 23:07:29.349110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:10.193 [2024-11-18 23:07:29.349136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.193 [2024-11-18 23:07:29.351161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.193 [2024-11-18 23:07:29.351233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:10.193 BaseBdev4 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 spare_malloc 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 spare_delay 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 [2024-11-18 23:07:29.389662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:10.193 [2024-11-18 23:07:29.389780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.193 [2024-11-18 23:07:29.389835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:10.193 [2024-11-18 23:07:29.389863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.193 [2024-11-18 23:07:29.391953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.193 [2024-11-18 23:07:29.392025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.193 spare 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 [2024-11-18 23:07:29.401727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.193 [2024-11-18 23:07:29.403570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.193 [2024-11-18 23:07:29.403681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.193 [2024-11-18 23:07:29.403746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.193 [2024-11-18 23:07:29.403873] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:10.193 [2024-11-18 23:07:29.403923] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.193 [2024-11-18 23:07:29.404179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.193 [2024-11-18 23:07:29.404370] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:10.193 [2024-11-18 23:07:29.404419] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:10.193 [2024-11-18 23:07:29.404580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.193 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.194 "name": "raid_bdev1", 00:12:10.194 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:10.194 "strip_size_kb": 0, 00:12:10.194 "state": "online", 00:12:10.194 "raid_level": "raid1", 00:12:10.194 "superblock": false, 00:12:10.194 "num_base_bdevs": 4, 00:12:10.194 "num_base_bdevs_discovered": 4, 00:12:10.194 "num_base_bdevs_operational": 4, 00:12:10.194 "base_bdevs_list": [ 00:12:10.194 { 00:12:10.194 "name": "BaseBdev1", 00:12:10.194 "uuid": "2de620c0-fc5e-5151-88f4-cf41da025f15", 00:12:10.194 "is_configured": true, 00:12:10.194 "data_offset": 0, 00:12:10.194 "data_size": 65536 00:12:10.194 }, 00:12:10.194 { 00:12:10.194 
"name": "BaseBdev2", 00:12:10.194 "uuid": "13823e0a-ec05-541b-aaae-db103ae303be", 00:12:10.194 "is_configured": true, 00:12:10.194 "data_offset": 0, 00:12:10.194 "data_size": 65536 00:12:10.194 }, 00:12:10.194 { 00:12:10.194 "name": "BaseBdev3", 00:12:10.194 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:10.194 "is_configured": true, 00:12:10.194 "data_offset": 0, 00:12:10.194 "data_size": 65536 00:12:10.194 }, 00:12:10.194 { 00:12:10.194 "name": "BaseBdev4", 00:12:10.194 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:10.194 "is_configured": true, 00:12:10.194 "data_offset": 0, 00:12:10.194 "data_size": 65536 00:12:10.194 } 00:12:10.194 ] 00:12:10.194 }' 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.194 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.764 [2024-11-18 23:07:29.857267] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:10.764 23:07:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:10.764 [2024-11-18 23:07:30.136547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:11.027 /dev/nbd0 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.027 
23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.027 1+0 records in 00:12:11.027 1+0 records out 00:12:11.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580408 s, 7.1 MB/s 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:11.027 23:07:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:16.313 65536+0 records in 00:12:16.313 65536+0 records out 00:12:16.313 33554432 bytes (34 MB, 32 MiB) copied, 4.9233 s, 6.8 MB/s 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:16.313 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:16.313 [2024-11-18 23:07:35.351749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:16.314 23:07:35 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.314 [2024-11-18 23:07:35.367733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.314 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.315 23:07:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.315 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.315 "name": "raid_bdev1", 00:12:16.315 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:16.315 "strip_size_kb": 0, 00:12:16.315 "state": "online", 00:12:16.315 "raid_level": "raid1", 00:12:16.316 "superblock": false, 00:12:16.316 "num_base_bdevs": 4, 00:12:16.316 "num_base_bdevs_discovered": 3, 00:12:16.316 "num_base_bdevs_operational": 3, 00:12:16.316 "base_bdevs_list": [ 00:12:16.316 { 00:12:16.316 "name": null, 00:12:16.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.316 "is_configured": false, 00:12:16.316 "data_offset": 0, 00:12:16.316 "data_size": 65536 00:12:16.316 }, 00:12:16.316 { 00:12:16.316 "name": "BaseBdev2", 00:12:16.316 "uuid": "13823e0a-ec05-541b-aaae-db103ae303be", 00:12:16.316 "is_configured": true, 00:12:16.316 "data_offset": 0, 00:12:16.316 "data_size": 65536 00:12:16.316 }, 00:12:16.316 { 00:12:16.316 "name": "BaseBdev3", 00:12:16.316 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:16.316 "is_configured": true, 00:12:16.316 "data_offset": 0, 00:12:16.316 "data_size": 65536 00:12:16.316 }, 00:12:16.316 { 00:12:16.316 "name": "BaseBdev4", 00:12:16.316 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:16.316 "is_configured": true, 00:12:16.317 "data_offset": 0, 00:12:16.317 "data_size": 65536 00:12:16.317 } 00:12:16.317 ] 00:12:16.317 }' 00:12:16.317 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.317 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.577 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:16.577 23:07:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.577 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.577 [2024-11-18 23:07:35.807146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:16.577 [2024-11-18 23:07:35.810533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:16.577 [2024-11-18 23:07:35.812440] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:16.577 23:07:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.577 23:07:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.546 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.546 "name": "raid_bdev1", 00:12:17.546 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 
00:12:17.546 "strip_size_kb": 0, 00:12:17.546 "state": "online", 00:12:17.546 "raid_level": "raid1", 00:12:17.547 "superblock": false, 00:12:17.547 "num_base_bdevs": 4, 00:12:17.547 "num_base_bdevs_discovered": 4, 00:12:17.547 "num_base_bdevs_operational": 4, 00:12:17.547 "process": { 00:12:17.547 "type": "rebuild", 00:12:17.547 "target": "spare", 00:12:17.547 "progress": { 00:12:17.547 "blocks": 20480, 00:12:17.547 "percent": 31 00:12:17.547 } 00:12:17.547 }, 00:12:17.547 "base_bdevs_list": [ 00:12:17.547 { 00:12:17.547 "name": "spare", 00:12:17.547 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:17.547 "is_configured": true, 00:12:17.547 "data_offset": 0, 00:12:17.547 "data_size": 65536 00:12:17.547 }, 00:12:17.547 { 00:12:17.547 "name": "BaseBdev2", 00:12:17.547 "uuid": "13823e0a-ec05-541b-aaae-db103ae303be", 00:12:17.547 "is_configured": true, 00:12:17.547 "data_offset": 0, 00:12:17.547 "data_size": 65536 00:12:17.547 }, 00:12:17.547 { 00:12:17.547 "name": "BaseBdev3", 00:12:17.547 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:17.547 "is_configured": true, 00:12:17.547 "data_offset": 0, 00:12:17.547 "data_size": 65536 00:12:17.547 }, 00:12:17.547 { 00:12:17.547 "name": "BaseBdev4", 00:12:17.547 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:17.547 "is_configured": true, 00:12:17.547 "data_offset": 0, 00:12:17.547 "data_size": 65536 00:12:17.547 } 00:12:17.547 ] 00:12:17.547 }' 00:12:17.547 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.547 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.547 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.806 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.806 23:07:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:12:17.806 23:07:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.806 23:07:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.806 [2024-11-18 23:07:36.931395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.806 [2024-11-18 23:07:37.017414] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:17.806 [2024-11-18 23:07:37.017509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.806 [2024-11-18 23:07:37.017533] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.806 [2024-11-18 23:07:37.017541] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.806 "name": "raid_bdev1", 00:12:17.806 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:17.806 "strip_size_kb": 0, 00:12:17.806 "state": "online", 00:12:17.806 "raid_level": "raid1", 00:12:17.806 "superblock": false, 00:12:17.806 "num_base_bdevs": 4, 00:12:17.806 "num_base_bdevs_discovered": 3, 00:12:17.806 "num_base_bdevs_operational": 3, 00:12:17.806 "base_bdevs_list": [ 00:12:17.806 { 00:12:17.806 "name": null, 00:12:17.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.806 "is_configured": false, 00:12:17.806 "data_offset": 0, 00:12:17.806 "data_size": 65536 00:12:17.806 }, 00:12:17.806 { 00:12:17.806 "name": "BaseBdev2", 00:12:17.806 "uuid": "13823e0a-ec05-541b-aaae-db103ae303be", 00:12:17.806 "is_configured": true, 00:12:17.806 "data_offset": 0, 00:12:17.806 "data_size": 65536 00:12:17.806 }, 00:12:17.806 { 00:12:17.806 "name": "BaseBdev3", 00:12:17.806 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:17.806 "is_configured": true, 00:12:17.806 "data_offset": 0, 00:12:17.806 "data_size": 65536 00:12:17.806 }, 00:12:17.806 { 00:12:17.806 "name": "BaseBdev4", 00:12:17.806 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:17.806 "is_configured": true, 00:12:17.806 "data_offset": 0, 00:12:17.806 "data_size": 65536 00:12:17.806 } 00:12:17.806 ] 00:12:17.806 }' 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.806 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.376 "name": "raid_bdev1", 00:12:18.376 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:18.376 "strip_size_kb": 0, 00:12:18.376 "state": "online", 00:12:18.376 "raid_level": "raid1", 00:12:18.376 "superblock": false, 00:12:18.376 "num_base_bdevs": 4, 00:12:18.376 "num_base_bdevs_discovered": 3, 00:12:18.376 "num_base_bdevs_operational": 3, 00:12:18.376 "base_bdevs_list": [ 00:12:18.376 { 00:12:18.376 "name": null, 00:12:18.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.376 "is_configured": false, 00:12:18.376 "data_offset": 0, 00:12:18.376 "data_size": 65536 00:12:18.376 }, 00:12:18.376 { 00:12:18.376 "name": "BaseBdev2", 00:12:18.376 "uuid": 
"13823e0a-ec05-541b-aaae-db103ae303be", 00:12:18.376 "is_configured": true, 00:12:18.376 "data_offset": 0, 00:12:18.376 "data_size": 65536 00:12:18.376 }, 00:12:18.376 { 00:12:18.376 "name": "BaseBdev3", 00:12:18.376 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:18.376 "is_configured": true, 00:12:18.376 "data_offset": 0, 00:12:18.376 "data_size": 65536 00:12:18.376 }, 00:12:18.376 { 00:12:18.376 "name": "BaseBdev4", 00:12:18.376 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:18.376 "is_configured": true, 00:12:18.376 "data_offset": 0, 00:12:18.376 "data_size": 65536 00:12:18.376 } 00:12:18.376 ] 00:12:18.376 }' 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.376 [2024-11-18 23:07:37.648661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:18.376 [2024-11-18 23:07:37.651916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:18.376 [2024-11-18 23:07:37.653741] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.376 23:07:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:19.313 23:07:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.313 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.572 "name": "raid_bdev1", 00:12:19.572 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:19.572 "strip_size_kb": 0, 00:12:19.572 "state": "online", 00:12:19.572 "raid_level": "raid1", 00:12:19.572 "superblock": false, 00:12:19.572 "num_base_bdevs": 4, 00:12:19.572 "num_base_bdevs_discovered": 4, 00:12:19.572 "num_base_bdevs_operational": 4, 00:12:19.572 "process": { 00:12:19.572 "type": "rebuild", 00:12:19.572 "target": "spare", 00:12:19.572 "progress": { 00:12:19.572 "blocks": 20480, 00:12:19.572 "percent": 31 00:12:19.572 } 00:12:19.572 }, 00:12:19.572 "base_bdevs_list": [ 00:12:19.572 { 00:12:19.572 "name": "spare", 00:12:19.572 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 }, 00:12:19.572 { 
00:12:19.572 "name": "BaseBdev2", 00:12:19.572 "uuid": "13823e0a-ec05-541b-aaae-db103ae303be", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 }, 00:12:19.572 { 00:12:19.572 "name": "BaseBdev3", 00:12:19.572 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 }, 00:12:19.572 { 00:12:19.572 "name": "BaseBdev4", 00:12:19.572 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 } 00:12:19.572 ] 00:12:19.572 }' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.572 [2024-11-18 23:07:38.820507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.572 
[2024-11-18 23:07:38.857733] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.572 "name": "raid_bdev1", 00:12:19.572 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:19.572 "strip_size_kb": 0, 00:12:19.572 "state": "online", 00:12:19.572 "raid_level": "raid1", 00:12:19.572 "superblock": false, 00:12:19.572 "num_base_bdevs": 4, 00:12:19.572 "num_base_bdevs_discovered": 3, 00:12:19.572 "num_base_bdevs_operational": 3, 00:12:19.572 "process": { 
00:12:19.572 "type": "rebuild", 00:12:19.572 "target": "spare", 00:12:19.572 "progress": { 00:12:19.572 "blocks": 24576, 00:12:19.572 "percent": 37 00:12:19.572 } 00:12:19.572 }, 00:12:19.572 "base_bdevs_list": [ 00:12:19.572 { 00:12:19.572 "name": "spare", 00:12:19.572 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 }, 00:12:19.572 { 00:12:19.572 "name": null, 00:12:19.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.572 "is_configured": false, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 }, 00:12:19.572 { 00:12:19.572 "name": "BaseBdev3", 00:12:19.572 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 }, 00:12:19.572 { 00:12:19.572 "name": "BaseBdev4", 00:12:19.572 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:19.572 "is_configured": true, 00:12:19.572 "data_offset": 0, 00:12:19.572 "data_size": 65536 00:12:19.572 } 00:12:19.572 ] 00:12:19.572 }' 00:12:19.572 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.832 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.832 23:07:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=355 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.832 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.832 "name": "raid_bdev1", 00:12:19.832 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:19.832 "strip_size_kb": 0, 00:12:19.832 "state": "online", 00:12:19.832 "raid_level": "raid1", 00:12:19.832 "superblock": false, 00:12:19.832 "num_base_bdevs": 4, 00:12:19.832 "num_base_bdevs_discovered": 3, 00:12:19.832 "num_base_bdevs_operational": 3, 00:12:19.832 "process": { 00:12:19.832 "type": "rebuild", 00:12:19.832 "target": "spare", 00:12:19.832 "progress": { 00:12:19.832 "blocks": 26624, 00:12:19.832 "percent": 40 00:12:19.832 } 00:12:19.832 }, 00:12:19.832 "base_bdevs_list": [ 00:12:19.832 { 00:12:19.832 "name": "spare", 00:12:19.833 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:19.833 "is_configured": true, 00:12:19.833 "data_offset": 0, 00:12:19.833 "data_size": 65536 00:12:19.833 }, 00:12:19.833 { 00:12:19.833 "name": null, 00:12:19.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.833 "is_configured": false, 00:12:19.833 "data_offset": 0, 00:12:19.833 "data_size": 65536 00:12:19.833 }, 
00:12:19.833 { 00:12:19.833 "name": "BaseBdev3", 00:12:19.833 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:19.833 "is_configured": true, 00:12:19.833 "data_offset": 0, 00:12:19.833 "data_size": 65536 00:12:19.833 }, 00:12:19.833 { 00:12:19.833 "name": "BaseBdev4", 00:12:19.833 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:19.833 "is_configured": true, 00:12:19.833 "data_offset": 0, 00:12:19.833 "data_size": 65536 00:12:19.833 } 00:12:19.833 ] 00:12:19.833 }' 00:12:19.833 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.833 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.833 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.833 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.833 23:07:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:20.772 23:07:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.031 23:07:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.031 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.031 "name": "raid_bdev1", 00:12:21.031 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:21.031 "strip_size_kb": 0, 00:12:21.031 "state": "online", 00:12:21.031 "raid_level": "raid1", 00:12:21.031 "superblock": false, 00:12:21.031 "num_base_bdevs": 4, 00:12:21.031 "num_base_bdevs_discovered": 3, 00:12:21.031 "num_base_bdevs_operational": 3, 00:12:21.031 "process": { 00:12:21.031 "type": "rebuild", 00:12:21.031 "target": "spare", 00:12:21.031 "progress": { 00:12:21.031 "blocks": 49152, 00:12:21.031 "percent": 75 00:12:21.031 } 00:12:21.031 }, 00:12:21.031 "base_bdevs_list": [ 00:12:21.031 { 00:12:21.031 "name": "spare", 00:12:21.031 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:21.032 "is_configured": true, 00:12:21.032 "data_offset": 0, 00:12:21.032 "data_size": 65536 00:12:21.032 }, 00:12:21.032 { 00:12:21.032 "name": null, 00:12:21.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.032 "is_configured": false, 00:12:21.032 "data_offset": 0, 00:12:21.032 "data_size": 65536 00:12:21.032 }, 00:12:21.032 { 00:12:21.032 "name": "BaseBdev3", 00:12:21.032 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:21.032 "is_configured": true, 00:12:21.032 "data_offset": 0, 00:12:21.032 "data_size": 65536 00:12:21.032 }, 00:12:21.032 { 00:12:21.032 "name": "BaseBdev4", 00:12:21.032 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:21.032 "is_configured": true, 00:12:21.032 "data_offset": 0, 00:12:21.032 "data_size": 65536 00:12:21.032 } 00:12:21.032 ] 00:12:21.032 }' 00:12:21.032 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.032 23:07:40 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.032 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.032 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.032 23:07:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:21.603 [2024-11-18 23:07:40.864558] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:21.603 [2024-11-18 23:07:40.864627] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:21.603 [2024-11-18 23:07:40.864678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.173 23:07:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.173 "name": "raid_bdev1", 00:12:22.173 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:22.173 "strip_size_kb": 0, 00:12:22.173 "state": "online", 00:12:22.173 "raid_level": "raid1", 00:12:22.173 "superblock": false, 00:12:22.173 "num_base_bdevs": 4, 00:12:22.173 "num_base_bdevs_discovered": 3, 00:12:22.173 "num_base_bdevs_operational": 3, 00:12:22.173 "base_bdevs_list": [ 00:12:22.173 { 00:12:22.173 "name": "spare", 00:12:22.173 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:22.173 "is_configured": true, 00:12:22.173 "data_offset": 0, 00:12:22.173 "data_size": 65536 00:12:22.174 }, 00:12:22.174 { 00:12:22.174 "name": null, 00:12:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.174 "is_configured": false, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 }, 00:12:22.174 { 00:12:22.174 "name": "BaseBdev3", 00:12:22.174 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:22.174 "is_configured": true, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 }, 00:12:22.174 { 00:12:22.174 "name": "BaseBdev4", 00:12:22.174 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:22.174 "is_configured": true, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 } 00:12:22.174 ] 00:12:22.174 }' 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.174 "name": "raid_bdev1", 00:12:22.174 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:22.174 "strip_size_kb": 0, 00:12:22.174 "state": "online", 00:12:22.174 "raid_level": "raid1", 00:12:22.174 "superblock": false, 00:12:22.174 "num_base_bdevs": 4, 00:12:22.174 "num_base_bdevs_discovered": 3, 00:12:22.174 "num_base_bdevs_operational": 3, 00:12:22.174 "base_bdevs_list": [ 00:12:22.174 { 00:12:22.174 "name": "spare", 00:12:22.174 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:22.174 "is_configured": true, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 }, 00:12:22.174 { 00:12:22.174 "name": null, 00:12:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.174 "is_configured": false, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 }, 00:12:22.174 { 00:12:22.174 "name": "BaseBdev3", 00:12:22.174 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 
00:12:22.174 "is_configured": true, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 }, 00:12:22.174 { 00:12:22.174 "name": "BaseBdev4", 00:12:22.174 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:22.174 "is_configured": true, 00:12:22.174 "data_offset": 0, 00:12:22.174 "data_size": 65536 00:12:22.174 } 00:12:22.174 ] 00:12:22.174 }' 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.174 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.434 
23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.434 "name": "raid_bdev1", 00:12:22.434 "uuid": "fffa2e47-6e21-4873-9a61-567530fd98ff", 00:12:22.434 "strip_size_kb": 0, 00:12:22.434 "state": "online", 00:12:22.434 "raid_level": "raid1", 00:12:22.434 "superblock": false, 00:12:22.434 "num_base_bdevs": 4, 00:12:22.434 "num_base_bdevs_discovered": 3, 00:12:22.434 "num_base_bdevs_operational": 3, 00:12:22.434 "base_bdevs_list": [ 00:12:22.434 { 00:12:22.434 "name": "spare", 00:12:22.434 "uuid": "d43376a3-8400-54ef-89b7-b358af6e32d9", 00:12:22.434 "is_configured": true, 00:12:22.434 "data_offset": 0, 00:12:22.434 "data_size": 65536 00:12:22.434 }, 00:12:22.434 { 00:12:22.434 "name": null, 00:12:22.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.434 "is_configured": false, 00:12:22.434 "data_offset": 0, 00:12:22.434 "data_size": 65536 00:12:22.434 }, 00:12:22.434 { 00:12:22.434 "name": "BaseBdev3", 00:12:22.434 "uuid": "5ae5a27b-721e-50d3-b826-29043de8d548", 00:12:22.434 "is_configured": true, 00:12:22.434 "data_offset": 0, 00:12:22.434 "data_size": 65536 00:12:22.434 }, 00:12:22.434 { 00:12:22.434 "name": "BaseBdev4", 00:12:22.434 "uuid": "244a4c30-a3b7-52f5-950e-b0e374fb6e3c", 00:12:22.434 "is_configured": true, 00:12:22.434 "data_offset": 0, 00:12:22.434 "data_size": 65536 00:12:22.434 } 00:12:22.434 ] 00:12:22.434 }' 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.434 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.693 [2024-11-18 23:07:41.978060] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.693 [2024-11-18 23:07:41.978139] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.693 [2024-11-18 23:07:41.978253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.693 [2024-11-18 23:07:41.978371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.693 [2024-11-18 23:07:41.978424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.693 23:07:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.693 23:07:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:22.693 23:07:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:22.693 23:07:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.694 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:22.954 /dev/nbd0 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.954 
23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.954 1+0 records in 00:12:22.954 1+0 records out 00:12:22.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576037 s, 7.1 MB/s 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.954 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:23.214 /dev/nbd1 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.214 23:07:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.214 1+0 records in 00:12:23.214 1+0 records out 00:12:23.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045466 s, 9.0 MB/s 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:23.214 23:07:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.475 23:07:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.475 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:23.735 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.735 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:23.735 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.735 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.735 23:07:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.735 23:07:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88135 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88135 ']' 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88135 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88135 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88135' 00:12:23.735 killing process with pid 88135 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88135 00:12:23.735 Received shutdown signal, test time was about 60.000000 seconds 00:12:23.735 00:12:23.735 Latency(us) 00:12:23.735 [2024-11-18T23:07:43.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.735 [2024-11-18T23:07:43.113Z] =================================================================================================================== 00:12:23.735 [2024-11-18T23:07:43.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:23.735 [2024-11-18 
23:07:43.037698] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.735 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88135 00:12:23.735 [2024-11-18 23:07:43.086785] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.996 23:07:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:23.996 00:12:23.996 real 0m14.994s 00:12:23.996 user 0m17.035s 00:12:23.996 sys 0m3.226s 00:12:23.996 ************************************ 00:12:23.996 END TEST raid_rebuild_test 00:12:23.996 ************************************ 00:12:23.996 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.996 23:07:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.257 23:07:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:24.257 23:07:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:24.257 23:07:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.257 23:07:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.257 ************************************ 00:12:24.257 START TEST raid_rebuild_test_sb 00:12:24.257 ************************************ 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:24.257 23:07:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88559 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88559 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88559 ']' 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.257 23:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.257 [2024-11-18 23:07:43.500414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:24.257 [2024-11-18 23:07:43.500616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88559 ] 00:12:24.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:24.257 Zero copy mechanism will not be used. 00:12:24.518 [2024-11-18 23:07:43.660709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.518 [2024-11-18 23:07:43.707032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.518 [2024-11-18 23:07:43.749849] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.518 [2024-11-18 23:07:43.749963] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 BaseBdev1_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.088 [2024-11-18 23:07:44.344055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:25.088 [2024-11-18 23:07:44.344164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.088 [2024-11-18 23:07:44.344219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.088 [2024-11-18 23:07:44.344255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.088 [2024-11-18 23:07:44.346353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.088 [2024-11-18 23:07:44.346424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.088 BaseBdev1 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 BaseBdev2_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 [2024-11-18 23:07:44.378061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:25.088 [2024-11-18 
23:07:44.378165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.088 [2024-11-18 23:07:44.378211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.088 [2024-11-18 23:07:44.378244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.088 [2024-11-18 23:07:44.380650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.088 [2024-11-18 23:07:44.380732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.088 BaseBdev2 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 BaseBdev3_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 [2024-11-18 23:07:44.402841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:25.088 [2024-11-18 23:07:44.402890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.088 [2024-11-18 23:07:44.402915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:12:25.088 [2024-11-18 23:07:44.402923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.088 [2024-11-18 23:07:44.404965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.088 BaseBdev3 00:12:25.088 [2024-11-18 23:07:44.405056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 BaseBdev4_malloc 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.088 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.088 [2024-11-18 23:07:44.423540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:25.089 [2024-11-18 23:07:44.423634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.089 [2024-11-18 23:07:44.423677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:25.089 [2024-11-18 23:07:44.423705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.089 [2024-11-18 23:07:44.425697] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.089 [2024-11-18 23:07:44.425781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:25.089 BaseBdev4 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.089 spare_malloc 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.089 spare_delay 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.089 [2024-11-18 23:07:44.452114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.089 [2024-11-18 23:07:44.452207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.089 [2024-11-18 23:07:44.452255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:12:25.089 [2024-11-18 23:07:44.452297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.089 [2024-11-18 23:07:44.454307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.089 [2024-11-18 23:07:44.454371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.089 spare 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.089 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.089 [2024-11-18 23:07:44.464179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.351 [2024-11-18 23:07:44.465985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.351 [2024-11-18 23:07:44.466096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.351 [2024-11-18 23:07:44.466162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.351 [2024-11-18 23:07:44.466376] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:25.351 [2024-11-18 23:07:44.466424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.351 [2024-11-18 23:07:44.466672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:25.351 [2024-11-18 23:07:44.466844] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:25.351 [2024-11-18 23:07:44.466896] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:12:25.351 [2024-11-18 23:07:44.467025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.351 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:25.351 "name": "raid_bdev1", 00:12:25.351 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:25.351 "strip_size_kb": 0, 00:12:25.351 "state": "online", 00:12:25.351 "raid_level": "raid1", 00:12:25.351 "superblock": true, 00:12:25.351 "num_base_bdevs": 4, 00:12:25.351 "num_base_bdevs_discovered": 4, 00:12:25.351 "num_base_bdevs_operational": 4, 00:12:25.351 "base_bdevs_list": [ 00:12:25.351 { 00:12:25.351 "name": "BaseBdev1", 00:12:25.351 "uuid": "5dbee176-2b59-58b2-b417-2481e6e74030", 00:12:25.351 "is_configured": true, 00:12:25.351 "data_offset": 2048, 00:12:25.351 "data_size": 63488 00:12:25.351 }, 00:12:25.351 { 00:12:25.351 "name": "BaseBdev2", 00:12:25.351 "uuid": "2f555864-cf7a-5695-897f-10008ac3cc87", 00:12:25.351 "is_configured": true, 00:12:25.351 "data_offset": 2048, 00:12:25.351 "data_size": 63488 00:12:25.351 }, 00:12:25.351 { 00:12:25.351 "name": "BaseBdev3", 00:12:25.351 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:25.352 "is_configured": true, 00:12:25.352 "data_offset": 2048, 00:12:25.352 "data_size": 63488 00:12:25.352 }, 00:12:25.352 { 00:12:25.352 "name": "BaseBdev4", 00:12:25.352 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:25.352 "is_configured": true, 00:12:25.352 "data_offset": 2048, 00:12:25.352 "data_size": 63488 00:12:25.352 } 00:12:25.352 ] 00:12:25.352 }' 00:12:25.352 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.352 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.612 
[2024-11-18 23:07:44.867706] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.612 23:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:25.871 [2024-11-18 23:07:45.139388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:25.871 /dev/nbd0 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.871 1+0 records in 00:12:25.871 1+0 records out 00:12:25.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461752 s, 8.9 MB/s 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:25.871 23:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:32.452 63488+0 records in 00:12:32.452 63488+0 records out 00:12:32.452 32505856 bytes (33 MB, 31 MiB) copied, 5.45186 s, 6.0 MB/s 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:32.452 [2024-11-18 23:07:50.843692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.452 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 [2024-11-18 23:07:50.879612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.453 "name": "raid_bdev1", 00:12:32.453 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:32.453 "strip_size_kb": 0, 00:12:32.453 "state": "online", 00:12:32.453 "raid_level": "raid1", 00:12:32.453 "superblock": true, 00:12:32.453 "num_base_bdevs": 4, 00:12:32.453 "num_base_bdevs_discovered": 3, 00:12:32.453 "num_base_bdevs_operational": 3, 00:12:32.453 "base_bdevs_list": [ 00:12:32.453 { 00:12:32.453 "name": null, 00:12:32.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.453 "is_configured": false, 00:12:32.453 "data_offset": 0, 00:12:32.453 "data_size": 63488 00:12:32.453 }, 00:12:32.453 { 00:12:32.453 "name": "BaseBdev2", 00:12:32.453 "uuid": 
"2f555864-cf7a-5695-897f-10008ac3cc87", 00:12:32.453 "is_configured": true, 00:12:32.453 "data_offset": 2048, 00:12:32.453 "data_size": 63488 00:12:32.453 }, 00:12:32.453 { 00:12:32.453 "name": "BaseBdev3", 00:12:32.453 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:32.453 "is_configured": true, 00:12:32.453 "data_offset": 2048, 00:12:32.453 "data_size": 63488 00:12:32.453 }, 00:12:32.453 { 00:12:32.453 "name": "BaseBdev4", 00:12:32.453 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:32.453 "is_configured": true, 00:12:32.453 "data_offset": 2048, 00:12:32.453 "data_size": 63488 00:12:32.453 } 00:12:32.453 ] 00:12:32.453 }' 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.453 23:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 23:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.453 23:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.453 23:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 [2024-11-18 23:07:51.322898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.453 [2024-11-18 23:07:51.326334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:32.453 [2024-11-18 23:07:51.328231] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.453 23:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.453 23:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.022 "name": "raid_bdev1", 00:12:33.022 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:33.022 "strip_size_kb": 0, 00:12:33.022 "state": "online", 00:12:33.022 "raid_level": "raid1", 00:12:33.022 "superblock": true, 00:12:33.022 "num_base_bdevs": 4, 00:12:33.022 "num_base_bdevs_discovered": 4, 00:12:33.022 "num_base_bdevs_operational": 4, 00:12:33.022 "process": { 00:12:33.022 "type": "rebuild", 00:12:33.022 "target": "spare", 00:12:33.022 "progress": { 00:12:33.022 "blocks": 20480, 00:12:33.022 "percent": 32 00:12:33.022 } 00:12:33.022 }, 00:12:33.022 "base_bdevs_list": [ 00:12:33.022 { 00:12:33.022 "name": "spare", 00:12:33.022 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:33.022 "is_configured": true, 00:12:33.022 "data_offset": 2048, 00:12:33.022 "data_size": 63488 00:12:33.022 }, 00:12:33.022 { 00:12:33.022 "name": "BaseBdev2", 00:12:33.022 "uuid": "2f555864-cf7a-5695-897f-10008ac3cc87", 00:12:33.022 "is_configured": true, 00:12:33.022 "data_offset": 2048, 
00:12:33.022 "data_size": 63488 00:12:33.022 }, 00:12:33.022 { 00:12:33.022 "name": "BaseBdev3", 00:12:33.022 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:33.022 "is_configured": true, 00:12:33.022 "data_offset": 2048, 00:12:33.022 "data_size": 63488 00:12:33.022 }, 00:12:33.022 { 00:12:33.022 "name": "BaseBdev4", 00:12:33.022 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:33.022 "is_configured": true, 00:12:33.022 "data_offset": 2048, 00:12:33.022 "data_size": 63488 00:12:33.022 } 00:12:33.022 ] 00:12:33.022 }' 00:12:33.022 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.282 [2024-11-18 23:07:52.467401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.282 [2024-11-18 23:07:52.532890] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.282 [2024-11-18 23:07:52.532956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.282 [2024-11-18 23:07:52.532977] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.282 [2024-11-18 23:07:52.532985] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.282 "name": "raid_bdev1", 00:12:33.282 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:33.282 "strip_size_kb": 0, 00:12:33.282 "state": "online", 00:12:33.282 "raid_level": "raid1", 
00:12:33.282 "superblock": true, 00:12:33.282 "num_base_bdevs": 4, 00:12:33.282 "num_base_bdevs_discovered": 3, 00:12:33.282 "num_base_bdevs_operational": 3, 00:12:33.282 "base_bdevs_list": [ 00:12:33.282 { 00:12:33.282 "name": null, 00:12:33.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.282 "is_configured": false, 00:12:33.282 "data_offset": 0, 00:12:33.282 "data_size": 63488 00:12:33.282 }, 00:12:33.282 { 00:12:33.282 "name": "BaseBdev2", 00:12:33.282 "uuid": "2f555864-cf7a-5695-897f-10008ac3cc87", 00:12:33.282 "is_configured": true, 00:12:33.282 "data_offset": 2048, 00:12:33.282 "data_size": 63488 00:12:33.282 }, 00:12:33.282 { 00:12:33.282 "name": "BaseBdev3", 00:12:33.282 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:33.282 "is_configured": true, 00:12:33.282 "data_offset": 2048, 00:12:33.282 "data_size": 63488 00:12:33.282 }, 00:12:33.282 { 00:12:33.282 "name": "BaseBdev4", 00:12:33.282 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:33.282 "is_configured": true, 00:12:33.282 "data_offset": 2048, 00:12:33.282 "data_size": 63488 00:12:33.282 } 00:12:33.282 ] 00:12:33.282 }' 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.282 23:07:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.852 "name": "raid_bdev1", 00:12:33.852 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:33.852 "strip_size_kb": 0, 00:12:33.852 "state": "online", 00:12:33.852 "raid_level": "raid1", 00:12:33.852 "superblock": true, 00:12:33.852 "num_base_bdevs": 4, 00:12:33.852 "num_base_bdevs_discovered": 3, 00:12:33.852 "num_base_bdevs_operational": 3, 00:12:33.852 "base_bdevs_list": [ 00:12:33.852 { 00:12:33.852 "name": null, 00:12:33.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.852 "is_configured": false, 00:12:33.852 "data_offset": 0, 00:12:33.852 "data_size": 63488 00:12:33.852 }, 00:12:33.852 { 00:12:33.852 "name": "BaseBdev2", 00:12:33.852 "uuid": "2f555864-cf7a-5695-897f-10008ac3cc87", 00:12:33.852 "is_configured": true, 00:12:33.852 "data_offset": 2048, 00:12:33.852 "data_size": 63488 00:12:33.852 }, 00:12:33.852 { 00:12:33.852 "name": "BaseBdev3", 00:12:33.852 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:33.852 "is_configured": true, 00:12:33.852 "data_offset": 2048, 00:12:33.852 "data_size": 63488 00:12:33.852 }, 00:12:33.852 { 00:12:33.852 "name": "BaseBdev4", 00:12:33.852 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:33.852 "is_configured": true, 00:12:33.852 "data_offset": 2048, 00:12:33.852 "data_size": 63488 00:12:33.852 } 00:12:33.852 ] 00:12:33.852 }' 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.852 23:07:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.852 [2024-11-18 23:07:53.187869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.852 [2024-11-18 23:07:53.191147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:33.852 [2024-11-18 23:07:53.193154] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.852 23:07:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.232 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.232 "name": "raid_bdev1", 00:12:35.232 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:35.232 "strip_size_kb": 0, 00:12:35.232 "state": "online", 00:12:35.232 "raid_level": "raid1", 00:12:35.232 "superblock": true, 00:12:35.232 "num_base_bdevs": 4, 00:12:35.232 "num_base_bdevs_discovered": 4, 00:12:35.232 "num_base_bdevs_operational": 4, 00:12:35.232 "process": { 00:12:35.232 "type": "rebuild", 00:12:35.232 "target": "spare", 00:12:35.233 "progress": { 00:12:35.233 "blocks": 20480, 00:12:35.233 "percent": 32 00:12:35.233 } 00:12:35.233 }, 00:12:35.233 "base_bdevs_list": [ 00:12:35.233 { 00:12:35.233 "name": "spare", 00:12:35.233 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 00:12:35.233 }, 00:12:35.233 { 00:12:35.233 "name": "BaseBdev2", 00:12:35.233 "uuid": "2f555864-cf7a-5695-897f-10008ac3cc87", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 00:12:35.233 }, 00:12:35.233 { 00:12:35.233 "name": "BaseBdev3", 00:12:35.233 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 00:12:35.233 }, 00:12:35.233 { 00:12:35.233 "name": "BaseBdev4", 00:12:35.233 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 00:12:35.233 } 00:12:35.233 ] 00:12:35.233 }' 00:12:35.233 23:07:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:35.233 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.233 [2024-11-18 23:07:54.363838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.233 [2024-11-18 23:07:54.497020] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:35.233 23:07:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.233 "name": "raid_bdev1", 00:12:35.233 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:35.233 "strip_size_kb": 0, 00:12:35.233 "state": "online", 00:12:35.233 "raid_level": "raid1", 00:12:35.233 "superblock": true, 00:12:35.233 "num_base_bdevs": 4, 00:12:35.233 "num_base_bdevs_discovered": 3, 00:12:35.233 "num_base_bdevs_operational": 3, 00:12:35.233 "process": { 00:12:35.233 "type": "rebuild", 00:12:35.233 "target": "spare", 00:12:35.233 "progress": { 00:12:35.233 "blocks": 24576, 00:12:35.233 "percent": 38 00:12:35.233 } 00:12:35.233 }, 00:12:35.233 "base_bdevs_list": [ 00:12:35.233 { 00:12:35.233 "name": "spare", 00:12:35.233 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 
00:12:35.233 }, 00:12:35.233 { 00:12:35.233 "name": null, 00:12:35.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.233 "is_configured": false, 00:12:35.233 "data_offset": 0, 00:12:35.233 "data_size": 63488 00:12:35.233 }, 00:12:35.233 { 00:12:35.233 "name": "BaseBdev3", 00:12:35.233 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 00:12:35.233 }, 00:12:35.233 { 00:12:35.233 "name": "BaseBdev4", 00:12:35.233 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:35.233 "is_configured": true, 00:12:35.233 "data_offset": 2048, 00:12:35.233 "data_size": 63488 00:12:35.233 } 00:12:35.233 ] 00:12:35.233 }' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.233 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.494 23:07:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.494 "name": "raid_bdev1", 00:12:35.494 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:35.494 "strip_size_kb": 0, 00:12:35.494 "state": "online", 00:12:35.494 "raid_level": "raid1", 00:12:35.494 "superblock": true, 00:12:35.494 "num_base_bdevs": 4, 00:12:35.494 "num_base_bdevs_discovered": 3, 00:12:35.494 "num_base_bdevs_operational": 3, 00:12:35.494 "process": { 00:12:35.494 "type": "rebuild", 00:12:35.494 "target": "spare", 00:12:35.494 "progress": { 00:12:35.494 "blocks": 26624, 00:12:35.494 "percent": 41 00:12:35.494 } 00:12:35.494 }, 00:12:35.494 "base_bdevs_list": [ 00:12:35.494 { 00:12:35.494 "name": "spare", 00:12:35.494 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:35.494 "is_configured": true, 00:12:35.494 "data_offset": 2048, 00:12:35.494 "data_size": 63488 00:12:35.494 }, 00:12:35.494 { 00:12:35.494 "name": null, 00:12:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.494 "is_configured": false, 00:12:35.494 "data_offset": 0, 00:12:35.494 "data_size": 63488 00:12:35.494 }, 00:12:35.494 { 00:12:35.494 "name": "BaseBdev3", 00:12:35.494 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:35.494 "is_configured": true, 00:12:35.494 "data_offset": 2048, 00:12:35.494 "data_size": 63488 00:12:35.494 }, 00:12:35.494 { 00:12:35.494 "name": "BaseBdev4", 00:12:35.494 "uuid": 
"2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:35.494 "is_configured": true, 00:12:35.494 "data_offset": 2048, 00:12:35.494 "data_size": 63488 00:12:35.494 } 00:12:35.494 ] 00:12:35.494 }' 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.494 23:07:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.432 23:07:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.692 23:07:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.692 "name": "raid_bdev1", 00:12:36.692 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:36.692 "strip_size_kb": 0, 00:12:36.692 "state": "online", 00:12:36.692 "raid_level": "raid1", 00:12:36.692 "superblock": true, 00:12:36.692 "num_base_bdevs": 4, 00:12:36.692 "num_base_bdevs_discovered": 3, 00:12:36.692 "num_base_bdevs_operational": 3, 00:12:36.692 "process": { 00:12:36.692 "type": "rebuild", 00:12:36.692 "target": "spare", 00:12:36.692 "progress": { 00:12:36.692 "blocks": 49152, 00:12:36.692 "percent": 77 00:12:36.692 } 00:12:36.692 }, 00:12:36.692 "base_bdevs_list": [ 00:12:36.692 { 00:12:36.692 "name": "spare", 00:12:36.692 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:36.692 "is_configured": true, 00:12:36.692 "data_offset": 2048, 00:12:36.692 "data_size": 63488 00:12:36.692 }, 00:12:36.692 { 00:12:36.692 "name": null, 00:12:36.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.692 "is_configured": false, 00:12:36.692 "data_offset": 0, 00:12:36.692 "data_size": 63488 00:12:36.692 }, 00:12:36.692 { 00:12:36.692 "name": "BaseBdev3", 00:12:36.692 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:36.692 "is_configured": true, 00:12:36.692 "data_offset": 2048, 00:12:36.692 "data_size": 63488 00:12:36.692 }, 00:12:36.692 { 00:12:36.692 "name": "BaseBdev4", 00:12:36.692 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:36.692 "is_configured": true, 00:12:36.692 "data_offset": 2048, 00:12:36.692 "data_size": 63488 00:12:36.692 } 00:12:36.692 ] 00:12:36.692 }' 00:12:36.692 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.692 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.692 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.692 23:07:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.692 23:07:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:37.262 [2024-11-18 23:07:56.403578] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:37.262 [2024-11-18 23:07:56.403699] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:37.262 [2024-11-18 23:07:56.403826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.831 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.831 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.831 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.831 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.831 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.831 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.832 "name": "raid_bdev1", 00:12:37.832 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:37.832 
"strip_size_kb": 0, 00:12:37.832 "state": "online", 00:12:37.832 "raid_level": "raid1", 00:12:37.832 "superblock": true, 00:12:37.832 "num_base_bdevs": 4, 00:12:37.832 "num_base_bdevs_discovered": 3, 00:12:37.832 "num_base_bdevs_operational": 3, 00:12:37.832 "base_bdevs_list": [ 00:12:37.832 { 00:12:37.832 "name": "spare", 00:12:37.832 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:37.832 "is_configured": true, 00:12:37.832 "data_offset": 2048, 00:12:37.832 "data_size": 63488 00:12:37.832 }, 00:12:37.832 { 00:12:37.832 "name": null, 00:12:37.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.832 "is_configured": false, 00:12:37.832 "data_offset": 0, 00:12:37.832 "data_size": 63488 00:12:37.832 }, 00:12:37.832 { 00:12:37.832 "name": "BaseBdev3", 00:12:37.832 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:37.832 "is_configured": true, 00:12:37.832 "data_offset": 2048, 00:12:37.832 "data_size": 63488 00:12:37.832 }, 00:12:37.832 { 00:12:37.832 "name": "BaseBdev4", 00:12:37.832 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:37.832 "is_configured": true, 00:12:37.832 "data_offset": 2048, 00:12:37.832 "data_size": 63488 00:12:37.832 } 00:12:37.832 ] 00:12:37.832 }' 00:12:37.832 23:07:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.832 "name": "raid_bdev1", 00:12:37.832 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:37.832 "strip_size_kb": 0, 00:12:37.832 "state": "online", 00:12:37.832 "raid_level": "raid1", 00:12:37.832 "superblock": true, 00:12:37.832 "num_base_bdevs": 4, 00:12:37.832 "num_base_bdevs_discovered": 3, 00:12:37.832 "num_base_bdevs_operational": 3, 00:12:37.832 "base_bdevs_list": [ 00:12:37.832 { 00:12:37.832 "name": "spare", 00:12:37.832 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:37.832 "is_configured": true, 00:12:37.832 "data_offset": 2048, 00:12:37.832 "data_size": 63488 00:12:37.832 }, 00:12:37.832 { 00:12:37.832 "name": null, 00:12:37.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.832 "is_configured": false, 00:12:37.832 "data_offset": 0, 00:12:37.832 "data_size": 63488 00:12:37.832 }, 00:12:37.832 { 00:12:37.832 "name": "BaseBdev3", 00:12:37.832 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:37.832 "is_configured": true, 00:12:37.832 "data_offset": 2048, 00:12:37.832 "data_size": 
63488 00:12:37.832 }, 00:12:37.832 { 00:12:37.832 "name": "BaseBdev4", 00:12:37.832 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:37.832 "is_configured": true, 00:12:37.832 "data_offset": 2048, 00:12:37.832 "data_size": 63488 00:12:37.832 } 00:12:37.832 ] 00:12:37.832 }' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.832 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.092 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.092 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.092 "name": "raid_bdev1", 00:12:38.092 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:38.092 "strip_size_kb": 0, 00:12:38.092 "state": "online", 00:12:38.092 "raid_level": "raid1", 00:12:38.092 "superblock": true, 00:12:38.092 "num_base_bdevs": 4, 00:12:38.092 "num_base_bdevs_discovered": 3, 00:12:38.092 "num_base_bdevs_operational": 3, 00:12:38.092 "base_bdevs_list": [ 00:12:38.092 { 00:12:38.092 "name": "spare", 00:12:38.092 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:38.092 "is_configured": true, 00:12:38.092 "data_offset": 2048, 00:12:38.092 "data_size": 63488 00:12:38.092 }, 00:12:38.092 { 00:12:38.092 "name": null, 00:12:38.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.092 "is_configured": false, 00:12:38.092 "data_offset": 0, 00:12:38.092 "data_size": 63488 00:12:38.092 }, 00:12:38.092 { 00:12:38.092 "name": "BaseBdev3", 00:12:38.092 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:38.092 "is_configured": true, 00:12:38.092 "data_offset": 2048, 00:12:38.092 "data_size": 63488 00:12:38.092 }, 00:12:38.092 { 00:12:38.092 "name": "BaseBdev4", 00:12:38.092 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:38.092 "is_configured": true, 00:12:38.092 "data_offset": 2048, 00:12:38.092 "data_size": 63488 00:12:38.092 } 00:12:38.092 ] 00:12:38.092 }' 00:12:38.092 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.092 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.352 [2024-11-18 23:07:57.637171] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.352 [2024-11-18 23:07:57.637239] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.352 [2024-11-18 23:07:57.637353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.352 [2024-11-18 23:07:57.637448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.352 [2024-11-18 23:07:57.637546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:38.352 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.353 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:38.612 /dev/nbd0 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:38.613 23:07:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.613 1+0 records in 00:12:38.613 1+0 records out 00:12:38.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363041 s, 11.3 MB/s 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.613 23:07:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:38.873 /dev/nbd1 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i = 1 )) 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.873 1+0 records in 00:12:38.873 1+0 records out 00:12:38.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413879 s, 9.9 MB/s 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.873 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.175 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.436 [2024-11-18 23:07:58.701529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:39.436 [2024-11-18 23:07:58.701639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.436 [2024-11-18 23:07:58.701677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:39.436 [2024-11-18 23:07:58.701711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.436 [2024-11-18 23:07:58.703912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.436 [2024-11-18 
23:07:58.703988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:39.436 [2024-11-18 23:07:58.704109] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:39.436 [2024-11-18 23:07:58.704173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.436 [2024-11-18 23:07:58.704319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.436 [2024-11-18 23:07:58.704445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.436 spare 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.436 [2024-11-18 23:07:58.804360] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:39.436 [2024-11-18 23:07:58.804434] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.436 [2024-11-18 23:07:58.804725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:39.436 [2024-11-18 23:07:58.804898] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:39.436 [2024-11-18 23:07:58.804941] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:39.436 [2024-11-18 23:07:58.805094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.436 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.696 "name": "raid_bdev1", 00:12:39.696 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:39.696 "strip_size_kb": 0, 00:12:39.696 "state": "online", 00:12:39.696 "raid_level": "raid1", 00:12:39.696 "superblock": true, 00:12:39.696 "num_base_bdevs": 4, 00:12:39.696 "num_base_bdevs_discovered": 3, 00:12:39.696 
"num_base_bdevs_operational": 3, 00:12:39.696 "base_bdevs_list": [ 00:12:39.696 { 00:12:39.696 "name": "spare", 00:12:39.696 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:39.696 "is_configured": true, 00:12:39.696 "data_offset": 2048, 00:12:39.696 "data_size": 63488 00:12:39.696 }, 00:12:39.696 { 00:12:39.696 "name": null, 00:12:39.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.696 "is_configured": false, 00:12:39.696 "data_offset": 2048, 00:12:39.696 "data_size": 63488 00:12:39.696 }, 00:12:39.696 { 00:12:39.696 "name": "BaseBdev3", 00:12:39.696 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:39.696 "is_configured": true, 00:12:39.696 "data_offset": 2048, 00:12:39.696 "data_size": 63488 00:12:39.696 }, 00:12:39.696 { 00:12:39.696 "name": "BaseBdev4", 00:12:39.696 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:39.696 "is_configured": true, 00:12:39.696 "data_offset": 2048, 00:12:39.696 "data_size": 63488 00:12:39.696 } 00:12:39.696 ] 00:12:39.696 }' 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.696 23:07:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.961 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.961 "name": "raid_bdev1", 00:12:39.961 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:39.962 "strip_size_kb": 0, 00:12:39.962 "state": "online", 00:12:39.962 "raid_level": "raid1", 00:12:39.962 "superblock": true, 00:12:39.962 "num_base_bdevs": 4, 00:12:39.962 "num_base_bdevs_discovered": 3, 00:12:39.962 "num_base_bdevs_operational": 3, 00:12:39.962 "base_bdevs_list": [ 00:12:39.962 { 00:12:39.962 "name": "spare", 00:12:39.962 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:39.962 "is_configured": true, 00:12:39.962 "data_offset": 2048, 00:12:39.962 "data_size": 63488 00:12:39.962 }, 00:12:39.962 { 00:12:39.962 "name": null, 00:12:39.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.962 "is_configured": false, 00:12:39.962 "data_offset": 2048, 00:12:39.962 "data_size": 63488 00:12:39.962 }, 00:12:39.962 { 00:12:39.962 "name": "BaseBdev3", 00:12:39.962 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:39.962 "is_configured": true, 00:12:39.962 "data_offset": 2048, 00:12:39.962 "data_size": 63488 00:12:39.962 }, 00:12:39.962 { 00:12:39.962 "name": "BaseBdev4", 00:12:39.962 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:39.962 "is_configured": true, 00:12:39.962 "data_offset": 2048, 00:12:39.962 "data_size": 63488 00:12:39.962 } 00:12:39.962 ] 00:12:39.962 }' 00:12:39.962 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.962 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.962 23:07:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.225 [2024-11-18 23:07:59.424342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.225 23:07:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.225 "name": "raid_bdev1", 00:12:40.225 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:40.225 "strip_size_kb": 0, 00:12:40.225 "state": "online", 00:12:40.225 "raid_level": "raid1", 00:12:40.225 "superblock": true, 00:12:40.225 "num_base_bdevs": 4, 00:12:40.225 "num_base_bdevs_discovered": 2, 00:12:40.225 "num_base_bdevs_operational": 2, 00:12:40.225 "base_bdevs_list": [ 00:12:40.225 { 00:12:40.225 "name": null, 00:12:40.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.225 "is_configured": false, 00:12:40.225 "data_offset": 0, 00:12:40.225 "data_size": 63488 00:12:40.225 }, 00:12:40.225 { 00:12:40.225 "name": null, 00:12:40.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.225 "is_configured": false, 00:12:40.225 "data_offset": 2048, 00:12:40.225 "data_size": 63488 00:12:40.225 }, 
00:12:40.225 { 00:12:40.225 "name": "BaseBdev3", 00:12:40.225 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:40.225 "is_configured": true, 00:12:40.225 "data_offset": 2048, 00:12:40.225 "data_size": 63488 00:12:40.225 }, 00:12:40.225 { 00:12:40.225 "name": "BaseBdev4", 00:12:40.225 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:40.225 "is_configured": true, 00:12:40.225 "data_offset": 2048, 00:12:40.225 "data_size": 63488 00:12:40.225 } 00:12:40.225 ] 00:12:40.225 }' 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.225 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.795 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:40.795 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.795 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.795 [2024-11-18 23:07:59.899539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.795 [2024-11-18 23:07:59.899745] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:40.795 [2024-11-18 23:07:59.899816] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:40.795 [2024-11-18 23:07:59.899872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.795 [2024-11-18 23:07:59.903011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:40.795 [2024-11-18 23:07:59.904909] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.795 23:07:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.795 23:07:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.734 "name": "raid_bdev1", 00:12:41.734 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:41.734 "strip_size_kb": 0, 00:12:41.734 "state": "online", 00:12:41.734 "raid_level": "raid1", 
00:12:41.734 "superblock": true, 00:12:41.734 "num_base_bdevs": 4, 00:12:41.734 "num_base_bdevs_discovered": 3, 00:12:41.734 "num_base_bdevs_operational": 3, 00:12:41.734 "process": { 00:12:41.734 "type": "rebuild", 00:12:41.734 "target": "spare", 00:12:41.734 "progress": { 00:12:41.734 "blocks": 20480, 00:12:41.734 "percent": 32 00:12:41.734 } 00:12:41.734 }, 00:12:41.734 "base_bdevs_list": [ 00:12:41.734 { 00:12:41.734 "name": "spare", 00:12:41.734 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:41.734 "is_configured": true, 00:12:41.734 "data_offset": 2048, 00:12:41.734 "data_size": 63488 00:12:41.734 }, 00:12:41.734 { 00:12:41.734 "name": null, 00:12:41.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.734 "is_configured": false, 00:12:41.734 "data_offset": 2048, 00:12:41.734 "data_size": 63488 00:12:41.734 }, 00:12:41.734 { 00:12:41.734 "name": "BaseBdev3", 00:12:41.734 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:41.734 "is_configured": true, 00:12:41.734 "data_offset": 2048, 00:12:41.734 "data_size": 63488 00:12:41.734 }, 00:12:41.734 { 00:12:41.734 "name": "BaseBdev4", 00:12:41.734 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:41.734 "is_configured": true, 00:12:41.734 "data_offset": 2048, 00:12:41.734 "data_size": 63488 00:12:41.734 } 00:12:41.734 ] 00:12:41.734 }' 00:12:41.734 23:08:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.734 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.734 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.734 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.734 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:41.734 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:41.734 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.734 [2024-11-18 23:08:01.039665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.734 [2024-11-18 23:08:01.108871] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:41.734 [2024-11-18 23:08:01.108979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.734 [2024-11-18 23:08:01.109014] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.734 [2024-11-18 23:08:01.109037] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.004 "name": "raid_bdev1", 00:12:42.004 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:42.004 "strip_size_kb": 0, 00:12:42.004 "state": "online", 00:12:42.004 "raid_level": "raid1", 00:12:42.004 "superblock": true, 00:12:42.004 "num_base_bdevs": 4, 00:12:42.004 "num_base_bdevs_discovered": 2, 00:12:42.004 "num_base_bdevs_operational": 2, 00:12:42.004 "base_bdevs_list": [ 00:12:42.004 { 00:12:42.004 "name": null, 00:12:42.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.004 "is_configured": false, 00:12:42.004 "data_offset": 0, 00:12:42.004 "data_size": 63488 00:12:42.004 }, 00:12:42.004 { 00:12:42.004 "name": null, 00:12:42.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.004 "is_configured": false, 00:12:42.004 "data_offset": 2048, 00:12:42.004 "data_size": 63488 00:12:42.004 }, 00:12:42.004 { 00:12:42.004 "name": "BaseBdev3", 00:12:42.004 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:42.004 "is_configured": true, 00:12:42.004 "data_offset": 2048, 00:12:42.004 "data_size": 63488 00:12:42.004 }, 00:12:42.004 { 00:12:42.004 "name": "BaseBdev4", 00:12:42.004 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:42.004 "is_configured": true, 00:12:42.004 "data_offset": 2048, 00:12:42.004 "data_size": 63488 00:12:42.004 } 00:12:42.004 ] 00:12:42.004 }' 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:42.004 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.264 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.264 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.264 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.264 [2024-11-18 23:08:01.531835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.264 [2024-11-18 23:08:01.531932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.264 [2024-11-18 23:08:01.531972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:42.264 [2024-11-18 23:08:01.532002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.264 [2024-11-18 23:08:01.532462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.264 [2024-11-18 23:08:01.532524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.264 [2024-11-18 23:08:01.532627] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:42.264 [2024-11-18 23:08:01.532674] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:42.264 [2024-11-18 23:08:01.532721] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:42.264 [2024-11-18 23:08:01.532782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.264 [2024-11-18 23:08:01.535802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:42.264 spare 00:12:42.264 23:08:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.264 23:08:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:42.264 [2024-11-18 23:08:01.537663] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.209 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.468 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.468 "name": "raid_bdev1", 00:12:43.468 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:43.468 "strip_size_kb": 0, 00:12:43.468 "state": "online", 00:12:43.468 
"raid_level": "raid1", 00:12:43.468 "superblock": true, 00:12:43.468 "num_base_bdevs": 4, 00:12:43.468 "num_base_bdevs_discovered": 3, 00:12:43.469 "num_base_bdevs_operational": 3, 00:12:43.469 "process": { 00:12:43.469 "type": "rebuild", 00:12:43.469 "target": "spare", 00:12:43.469 "progress": { 00:12:43.469 "blocks": 20480, 00:12:43.469 "percent": 32 00:12:43.469 } 00:12:43.469 }, 00:12:43.469 "base_bdevs_list": [ 00:12:43.469 { 00:12:43.469 "name": "spare", 00:12:43.469 "uuid": "91bdf986-f51b-5e6a-b2a3-39821e46c4a1", 00:12:43.469 "is_configured": true, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 }, 00:12:43.469 { 00:12:43.469 "name": null, 00:12:43.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.469 "is_configured": false, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 }, 00:12:43.469 { 00:12:43.469 "name": "BaseBdev3", 00:12:43.469 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:43.469 "is_configured": true, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 }, 00:12:43.469 { 00:12:43.469 "name": "BaseBdev4", 00:12:43.469 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:43.469 "is_configured": true, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 } 00:12:43.469 ] 00:12:43.469 }' 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.469 [2024-11-18 23:08:02.698680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.469 [2024-11-18 23:08:02.741546] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:43.469 [2024-11-18 23:08:02.741645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.469 [2024-11-18 23:08:02.741682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.469 [2024-11-18 23:08:02.741701] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.469 
23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.469 "name": "raid_bdev1", 00:12:43.469 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:43.469 "strip_size_kb": 0, 00:12:43.469 "state": "online", 00:12:43.469 "raid_level": "raid1", 00:12:43.469 "superblock": true, 00:12:43.469 "num_base_bdevs": 4, 00:12:43.469 "num_base_bdevs_discovered": 2, 00:12:43.469 "num_base_bdevs_operational": 2, 00:12:43.469 "base_bdevs_list": [ 00:12:43.469 { 00:12:43.469 "name": null, 00:12:43.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.469 "is_configured": false, 00:12:43.469 "data_offset": 0, 00:12:43.469 "data_size": 63488 00:12:43.469 }, 00:12:43.469 { 00:12:43.469 "name": null, 00:12:43.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.469 "is_configured": false, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 }, 00:12:43.469 { 00:12:43.469 "name": "BaseBdev3", 00:12:43.469 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:43.469 "is_configured": true, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 }, 00:12:43.469 { 00:12:43.469 "name": "BaseBdev4", 00:12:43.469 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:43.469 "is_configured": true, 00:12:43.469 "data_offset": 2048, 00:12:43.469 "data_size": 63488 00:12:43.469 } 00:12:43.469 ] 00:12:43.469 }' 00:12:43.469 23:08:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.469 23:08:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.038 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.039 "name": "raid_bdev1", 00:12:44.039 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:44.039 "strip_size_kb": 0, 00:12:44.039 "state": "online", 00:12:44.039 "raid_level": "raid1", 00:12:44.039 "superblock": true, 00:12:44.039 "num_base_bdevs": 4, 00:12:44.039 "num_base_bdevs_discovered": 2, 00:12:44.039 "num_base_bdevs_operational": 2, 00:12:44.039 "base_bdevs_list": [ 00:12:44.039 { 00:12:44.039 "name": null, 00:12:44.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.039 "is_configured": false, 00:12:44.039 "data_offset": 0, 00:12:44.039 "data_size": 63488 00:12:44.039 }, 00:12:44.039 
{ 00:12:44.039 "name": null, 00:12:44.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.039 "is_configured": false, 00:12:44.039 "data_offset": 2048, 00:12:44.039 "data_size": 63488 00:12:44.039 }, 00:12:44.039 { 00:12:44.039 "name": "BaseBdev3", 00:12:44.039 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:44.039 "is_configured": true, 00:12:44.039 "data_offset": 2048, 00:12:44.039 "data_size": 63488 00:12:44.039 }, 00:12:44.039 { 00:12:44.039 "name": "BaseBdev4", 00:12:44.039 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:44.039 "is_configured": true, 00:12:44.039 "data_offset": 2048, 00:12:44.039 "data_size": 63488 00:12:44.039 } 00:12:44.039 ] 00:12:44.039 }' 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.039 [2024-11-18 23:08:03.360330] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:44.039 [2024-11-18 23:08:03.360433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.039 [2024-11-18 23:08:03.360471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:44.039 [2024-11-18 23:08:03.360482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.039 [2024-11-18 23:08:03.360906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.039 [2024-11-18 23:08:03.360926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.039 [2024-11-18 23:08:03.360998] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:44.039 [2024-11-18 23:08:03.361011] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:44.039 [2024-11-18 23:08:03.361020] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:44.039 [2024-11-18 23:08:03.361029] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:44.039 BaseBdev1 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.039 23:08:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.420 23:08:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.420 "name": "raid_bdev1", 00:12:45.420 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:45.420 "strip_size_kb": 0, 00:12:45.420 "state": "online", 00:12:45.420 "raid_level": "raid1", 00:12:45.420 "superblock": true, 00:12:45.420 "num_base_bdevs": 4, 00:12:45.420 "num_base_bdevs_discovered": 2, 00:12:45.420 "num_base_bdevs_operational": 2, 00:12:45.420 "base_bdevs_list": [ 00:12:45.420 { 00:12:45.420 "name": null, 00:12:45.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.420 "is_configured": false, 00:12:45.420 "data_offset": 0, 00:12:45.420 "data_size": 63488 00:12:45.420 }, 00:12:45.420 { 00:12:45.420 "name": null, 00:12:45.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.420 
"is_configured": false, 00:12:45.420 "data_offset": 2048, 00:12:45.420 "data_size": 63488 00:12:45.420 }, 00:12:45.420 { 00:12:45.420 "name": "BaseBdev3", 00:12:45.420 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:45.420 "is_configured": true, 00:12:45.420 "data_offset": 2048, 00:12:45.420 "data_size": 63488 00:12:45.420 }, 00:12:45.420 { 00:12:45.420 "name": "BaseBdev4", 00:12:45.420 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:45.420 "is_configured": true, 00:12:45.420 "data_offset": 2048, 00:12:45.420 "data_size": 63488 00:12:45.420 } 00:12:45.420 ] 00:12:45.420 }' 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.420 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.688 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.688 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.688 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.688 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.688 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.688 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.689 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.689 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.689 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.689 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.689 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:45.689 "name": "raid_bdev1", 00:12:45.689 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:45.689 "strip_size_kb": 0, 00:12:45.689 "state": "online", 00:12:45.689 "raid_level": "raid1", 00:12:45.689 "superblock": true, 00:12:45.689 "num_base_bdevs": 4, 00:12:45.689 "num_base_bdevs_discovered": 2, 00:12:45.689 "num_base_bdevs_operational": 2, 00:12:45.689 "base_bdevs_list": [ 00:12:45.689 { 00:12:45.689 "name": null, 00:12:45.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.689 "is_configured": false, 00:12:45.689 "data_offset": 0, 00:12:45.689 "data_size": 63488 00:12:45.689 }, 00:12:45.689 { 00:12:45.689 "name": null, 00:12:45.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.689 "is_configured": false, 00:12:45.689 "data_offset": 2048, 00:12:45.689 "data_size": 63488 00:12:45.689 }, 00:12:45.689 { 00:12:45.689 "name": "BaseBdev3", 00:12:45.689 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:45.689 "is_configured": true, 00:12:45.689 "data_offset": 2048, 00:12:45.689 "data_size": 63488 00:12:45.689 }, 00:12:45.689 { 00:12:45.689 "name": "BaseBdev4", 00:12:45.689 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:45.689 "is_configured": true, 00:12:45.689 "data_offset": 2048, 00:12:45.689 "data_size": 63488 00:12:45.690 } 00:12:45.690 ] 00:12:45.690 }' 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.690 [2024-11-18 23:08:04.977586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.690 [2024-11-18 23:08:04.977773] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:45.690 [2024-11-18 23:08:04.977843] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:45.690 request: 00:12:45.690 { 00:12:45.690 "base_bdev": "BaseBdev1", 00:12:45.690 "raid_bdev": "raid_bdev1", 00:12:45.690 "method": "bdev_raid_add_base_bdev", 00:12:45.690 "req_id": 1 00:12:45.690 } 00:12:45.690 Got JSON-RPC error response 00:12:45.690 response: 00:12:45.690 { 00:12:45.690 "code": -22, 00:12:45.690 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:45.690 } 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:45.690 23:08:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:45.691 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:45.691 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:45.691 23:08:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:45.691 23:08:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.630 23:08:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:46.891 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.891 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.891 "name": "raid_bdev1", 00:12:46.891 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:46.891 "strip_size_kb": 0, 00:12:46.891 "state": "online", 00:12:46.891 "raid_level": "raid1", 00:12:46.891 "superblock": true, 00:12:46.891 "num_base_bdevs": 4, 00:12:46.891 "num_base_bdevs_discovered": 2, 00:12:46.891 "num_base_bdevs_operational": 2, 00:12:46.891 "base_bdevs_list": [ 00:12:46.891 { 00:12:46.891 "name": null, 00:12:46.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.891 "is_configured": false, 00:12:46.891 "data_offset": 0, 00:12:46.891 "data_size": 63488 00:12:46.891 }, 00:12:46.891 { 00:12:46.891 "name": null, 00:12:46.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.891 "is_configured": false, 00:12:46.891 "data_offset": 2048, 00:12:46.891 "data_size": 63488 00:12:46.891 }, 00:12:46.891 { 00:12:46.891 "name": "BaseBdev3", 00:12:46.891 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:46.891 "is_configured": true, 00:12:46.891 "data_offset": 2048, 00:12:46.891 "data_size": 63488 00:12:46.891 }, 00:12:46.891 { 00:12:46.891 "name": "BaseBdev4", 00:12:46.891 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:46.891 "is_configured": true, 00:12:46.891 "data_offset": 2048, 00:12:46.891 "data_size": 63488 00:12:46.891 } 00:12:46.891 ] 00:12:46.891 }' 00:12:46.891 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.891 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.151 23:08:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.151 "name": "raid_bdev1", 00:12:47.151 "uuid": "01a0ebd8-47d4-4795-a85b-538b3094cf01", 00:12:47.151 "strip_size_kb": 0, 00:12:47.151 "state": "online", 00:12:47.151 "raid_level": "raid1", 00:12:47.151 "superblock": true, 00:12:47.151 "num_base_bdevs": 4, 00:12:47.151 "num_base_bdevs_discovered": 2, 00:12:47.151 "num_base_bdevs_operational": 2, 00:12:47.151 "base_bdevs_list": [ 00:12:47.151 { 00:12:47.151 "name": null, 00:12:47.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.151 "is_configured": false, 00:12:47.151 "data_offset": 0, 00:12:47.151 "data_size": 63488 00:12:47.151 }, 00:12:47.151 { 00:12:47.151 "name": null, 00:12:47.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.151 "is_configured": false, 00:12:47.151 "data_offset": 2048, 00:12:47.151 "data_size": 63488 00:12:47.151 }, 00:12:47.151 { 00:12:47.151 "name": "BaseBdev3", 00:12:47.151 "uuid": "4ff1e30a-4659-5a41-a722-07cb443a4d33", 00:12:47.151 "is_configured": true, 00:12:47.151 "data_offset": 2048, 00:12:47.151 "data_size": 63488 00:12:47.151 }, 
00:12:47.151 { 00:12:47.151 "name": "BaseBdev4", 00:12:47.151 "uuid": "2f91b33d-9036-5bb2-9874-0d9eaf3f547c", 00:12:47.151 "is_configured": true, 00:12:47.151 "data_offset": 2048, 00:12:47.151 "data_size": 63488 00:12:47.151 } 00:12:47.151 ] 00:12:47.151 }' 00:12:47.151 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88559 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88559 ']' 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88559 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88559 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.411 killing process with pid 88559 00:12:47.411 Received shutdown signal, test time was about 60.000000 seconds 00:12:47.411 00:12:47.411 Latency(us) 00:12:47.411 [2024-11-18T23:08:06.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.411 [2024-11-18T23:08:06.789Z] =================================================================================================================== 00:12:47.411 [2024-11-18T23:08:06.789Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88559' 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88559 00:12:47.411 [2024-11-18 23:08:06.628837] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.411 [2024-11-18 23:08:06.628941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.411 [2024-11-18 23:08:06.629001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.411 [2024-11-18 23:08:06.629012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:47.411 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88559 00:12:47.411 [2024-11-18 23:08:06.678688] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.672 23:08:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:47.672 00:12:47.672 real 0m23.518s 00:12:47.672 user 0m28.313s 00:12:47.672 sys 0m3.932s 00:12:47.672 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.672 ************************************ 00:12:47.672 END TEST raid_rebuild_test_sb 00:12:47.672 ************************************ 00:12:47.672 23:08:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.672 23:08:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:47.672 23:08:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:47.672 23:08:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.672 23:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:47.672 ************************************ 00:12:47.672 START TEST raid_rebuild_test_io 00:12:47.672 ************************************ 00:12:47.672 23:08:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:47.672 23:08:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89297 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89297 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89297 ']' 00:12:47.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.672 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.932 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.932 Zero copy mechanism will not be used. 00:12:47.932 [2024-11-18 23:08:07.097322] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:47.932 [2024-11-18 23:08:07.097453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89297 ] 00:12:47.932 [2024-11-18 23:08:07.256910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.932 [2024-11-18 23:08:07.302427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.192 [2024-11-18 23:08:07.345875] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.192 [2024-11-18 23:08:07.345906] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.763 BaseBdev1_malloc 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.763 [2024-11-18 23:08:07.932624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:48.763 [2024-11-18 23:08:07.932734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.763 [2024-11-18 23:08:07.932779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:48.763 [2024-11-18 23:08:07.932818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.763 [2024-11-18 23:08:07.934921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.763 [2024-11-18 23:08:07.935012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:48.763 BaseBdev1 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.763 BaseBdev2_malloc 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.763 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.764 [2024-11-18 23:08:07.970892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:48.764 [2024-11-18 23:08:07.971000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.764 [2024-11-18 23:08:07.971028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:48.764 [2024-11-18 23:08:07.971039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.764 [2024-11-18 23:08:07.973517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.764 [2024-11-18 23:08:07.973557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:48.764 BaseBdev2 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.764 BaseBdev3_malloc 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.764 23:08:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.764 [2024-11-18 23:08:07.999552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:48.764 [2024-11-18 23:08:07.999656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.764 [2024-11-18 23:08:07.999698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:48.764 [2024-11-18 23:08:07.999729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.764 [2024-11-18 23:08:08.001748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.764 [2024-11-18 23:08:08.001814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:48.764 BaseBdev3 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.764 BaseBdev4_malloc 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.764 [2024-11-18 23:08:08.028158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:12:48.764 [2024-11-18 23:08:08.028266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:48.764 [2024-11-18 23:08:08.028319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:48.764 [2024-11-18 23:08:08.028353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:48.764 [2024-11-18 23:08:08.030361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:48.764 [2024-11-18 23:08:08.030425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:48.764 BaseBdev4
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.764 spare_malloc
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.764 spare_delay
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.764 [2024-11-18 23:08:08.068821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:48.764 [2024-11-18 23:08:08.068926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:48.764 [2024-11-18 23:08:08.068966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:48.764 [2024-11-18 23:08:08.068995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:48.764 [2024-11-18 23:08:08.071057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:48.764 [2024-11-18 23:08:08.071125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:48.764 spare
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.764 [2024-11-18 23:08:08.080890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:48.764 [2024-11-18 23:08:08.082719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:48.764 [2024-11-18 23:08:08.082842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:48.764 [2024-11-18 23:08:08.082906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:48.764 [2024-11-18 23:08:08.083038] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:12:48.764 [2024-11-18 23:08:08.083081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:12:48.764 [2024-11-18 23:08:08.083359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:48.764 [2024-11-18 23:08:08.083549] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:12:48.764 [2024-11-18 23:08:08.083598] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:12:48.764 [2024-11-18 23:08:08.083749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:48.764 "name": "raid_bdev1",
00:12:48.764 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:48.764 "strip_size_kb": 0,
00:12:48.764 "state": "online",
00:12:48.764 "raid_level": "raid1",
00:12:48.764 "superblock": false,
00:12:48.764 "num_base_bdevs": 4,
00:12:48.764 "num_base_bdevs_discovered": 4,
00:12:48.764 "num_base_bdevs_operational": 4,
00:12:48.764 "base_bdevs_list": [
00:12:48.764 {
00:12:48.764 "name": "BaseBdev1",
00:12:48.764 "uuid": "e8e79b00-49b8-5b70-b746-be9f6eefa8da",
00:12:48.764 "is_configured": true,
00:12:48.764 "data_offset": 0,
00:12:48.764 "data_size": 65536
00:12:48.764 },
00:12:48.764 {
00:12:48.764 "name": "BaseBdev2",
00:12:48.764 "uuid": "884fd40e-48ae-506c-985d-19deecd06569",
00:12:48.764 "is_configured": true,
00:12:48.764 "data_offset": 0,
00:12:48.764 "data_size": 65536
00:12:48.764 },
00:12:48.764 {
00:12:48.764 "name": "BaseBdev3",
00:12:48.764 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:48.764 "is_configured": true,
00:12:48.764 "data_offset": 0,
00:12:48.764 "data_size": 65536
00:12:48.764 },
00:12:48.764 {
00:12:48.764 "name": "BaseBdev4",
00:12:48.764 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:48.764 "is_configured": true,
00:12:48.764 "data_offset": 0,
00:12:48.764 "data_size": 65536
00:12:48.764 }
00:12:48.764 ]
00:12:48.764 }'
00:12:48.764 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.024 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.284 [2024-11-18 23:08:08.544351] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.284 [2024-11-18 23:08:08.623928] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.284 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.544 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:49.544 "name": "raid_bdev1",
00:12:49.544 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:49.544 "strip_size_kb": 0,
00:12:49.544 "state": "online",
00:12:49.544 "raid_level": "raid1",
00:12:49.544 "superblock": false,
00:12:49.544 "num_base_bdevs": 4,
00:12:49.544 "num_base_bdevs_discovered": 3,
00:12:49.544 "num_base_bdevs_operational": 3,
00:12:49.544 "base_bdevs_list": [
00:12:49.544 {
00:12:49.544 "name": null,
00:12:49.544 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:49.544 "is_configured": false,
00:12:49.544 "data_offset": 0,
00:12:49.544 "data_size": 65536
00:12:49.544 },
00:12:49.544 {
00:12:49.544 "name": "BaseBdev2",
00:12:49.544 "uuid": "884fd40e-48ae-506c-985d-19deecd06569",
00:12:49.544 "is_configured": true,
00:12:49.544 "data_offset": 0,
00:12:49.544 "data_size": 65536
00:12:49.544 },
00:12:49.544 {
00:12:49.544 "name": "BaseBdev3",
00:12:49.544 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:49.544 "is_configured": true,
00:12:49.544 "data_offset": 0,
00:12:49.544 "data_size": 65536
00:12:49.544 },
00:12:49.544 {
00:12:49.544 "name": "BaseBdev4",
00:12:49.544 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:49.544 "is_configured": true,
00:12:49.544 "data_offset": 0,
00:12:49.544 "data_size": 65536
00:12:49.544 }
00:12:49.544 ]
00:12:49.544 }'
00:12:49.544 23:08:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.544 23:08:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.544 [2024-11-18 23:08:08.713682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:49.544 I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:49.544 Zero copy mechanism will not be used.
00:12:49.544 Running I/O for 60 seconds...
00:12:49.804 23:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:49.804 23:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.804 23:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.804 [2024-11-18 23:08:09.073680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:49.804 23:08:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.804 23:08:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:49.804 [2024-11-18 23:08:09.132068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:12:49.804 [2024-11-18 23:08:09.134092] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:50.065 [2024-11-18 23:08:09.254581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:50.065 [2024-11-18 23:08:09.255857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:50.323 [2024-11-18 23:08:09.474598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:50.323 [2024-11-18 23:08:09.474901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:50.583 204.00 IOPS, 612.00 MiB/s [2024-11-18T23:08:09.961Z] [2024-11-18 23:08:09.809516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:50.583 [2024-11-18 23:08:09.815022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:50.844 [2024-11-18 23:08:10.019672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:50.844 "name": "raid_bdev1",
00:12:50.844 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:50.844 "strip_size_kb": 0,
00:12:50.844 "state": "online",
00:12:50.844 "raid_level": "raid1",
00:12:50.844 "superblock": false,
00:12:50.844 "num_base_bdevs": 4,
00:12:50.844 "num_base_bdevs_discovered": 4,
00:12:50.844 "num_base_bdevs_operational": 4,
00:12:50.844 "process": {
00:12:50.844 "type": "rebuild",
00:12:50.844 "target": "spare",
00:12:50.844 "progress": {
00:12:50.844 "blocks": 10240,
00:12:50.844 "percent": 15
00:12:50.844 }
00:12:50.844 },
00:12:50.844 "base_bdevs_list": [
00:12:50.844 {
00:12:50.844 "name": "spare",
00:12:50.844 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451",
00:12:50.844 "is_configured": true,
00:12:50.844 "data_offset": 0,
00:12:50.844 "data_size": 65536
00:12:50.844 },
00:12:50.844 {
00:12:50.844 "name": "BaseBdev2",
00:12:50.844 "uuid": "884fd40e-48ae-506c-985d-19deecd06569",
00:12:50.844 "is_configured": true,
00:12:50.844 "data_offset": 0,
00:12:50.844 "data_size": 65536
00:12:50.844 },
00:12:50.844 {
00:12:50.844 "name": "BaseBdev3",
00:12:50.844 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:50.844 "is_configured": true,
00:12:50.844 "data_offset": 0,
00:12:50.844 "data_size": 65536
00:12:50.844 },
00:12:50.844 {
00:12:50.844 "name": "BaseBdev4",
00:12:50.844 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:50.844 "is_configured": true,
00:12:50.844 "data_offset": 0,
00:12:50.844 "data_size": 65536
00:12:50.844 }
00:12:50.844 ]
00:12:50.844 }'
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:50.844 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:51.104 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:51.104 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:51.104 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.104 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.104 [2024-11-18 23:08:10.270077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:51.104 [2024-11-18 23:08:10.369776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
[2024-11-18 23:08:10.469581] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:51.364 [2024-11-18 23:08:10.484056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:51.364 [2024-11-18 23:08:10.484153] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:51.364 [2024-11-18 23:08:10.484182] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:51.364 [2024-11-18 23:08:10.506254] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:51.365 "name": "raid_bdev1",
00:12:51.365 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:51.365 "strip_size_kb": 0,
00:12:51.365 "state": "online",
00:12:51.365 "raid_level": "raid1",
00:12:51.365 "superblock": false,
00:12:51.365 "num_base_bdevs": 4,
00:12:51.365 "num_base_bdevs_discovered": 3,
00:12:51.365 "num_base_bdevs_operational": 3,
00:12:51.365 "base_bdevs_list": [
00:12:51.365 {
00:12:51.365 "name": null,
00:12:51.365 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:51.365 "is_configured": false,
00:12:51.365 "data_offset": 0,
00:12:51.365 "data_size": 65536
00:12:51.365 },
00:12:51.365 {
00:12:51.365 "name": "BaseBdev2",
00:12:51.365 "uuid": "884fd40e-48ae-506c-985d-19deecd06569",
00:12:51.365 "is_configured": true,
00:12:51.365 "data_offset": 0,
00:12:51.365 "data_size": 65536
00:12:51.365 },
00:12:51.365 {
00:12:51.365 "name": "BaseBdev3",
00:12:51.365 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:51.365 "is_configured": true,
00:12:51.365 "data_offset": 0,
00:12:51.365 "data_size": 65536
00:12:51.365 },
00:12:51.365 {
00:12:51.365 "name": "BaseBdev4",
00:12:51.365 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:51.365 "is_configured": true,
00:12:51.365 "data_offset": 0,
00:12:51.365 "data_size": 65536
00:12:51.365 }
00:12:51.365 ]
00:12:51.365 }'
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:51.365 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.625 177.00 IOPS, 531.00 MiB/s [2024-11-18T23:08:11.003Z] 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.625 23:08:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:51.886 "name": "raid_bdev1",
00:12:51.886 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:51.886 "strip_size_kb": 0,
00:12:51.886 "state": "online",
00:12:51.886 "raid_level": "raid1",
00:12:51.886 "superblock": false,
00:12:51.886 "num_base_bdevs": 4,
00:12:51.886 "num_base_bdevs_discovered": 3,
00:12:51.886 "num_base_bdevs_operational": 3,
00:12:51.886 "base_bdevs_list": [
00:12:51.886 {
00:12:51.886 "name": null,
00:12:51.886 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:51.886 "is_configured": false,
00:12:51.886 "data_offset": 0,
00:12:51.886 "data_size": 65536
00:12:51.886 },
00:12:51.886 {
00:12:51.886 "name": "BaseBdev2",
00:12:51.886 "uuid": "884fd40e-48ae-506c-985d-19deecd06569",
00:12:51.886 "is_configured": true,
00:12:51.886 "data_offset": 0,
00:12:51.886 "data_size": 65536
00:12:51.886 },
00:12:51.886 {
00:12:51.886 "name": "BaseBdev3",
00:12:51.886 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:51.886 "is_configured": true,
00:12:51.886 "data_offset": 0,
00:12:51.886 "data_size": 65536
00:12:51.886 },
00:12:51.886 {
00:12:51.886 "name": "BaseBdev4",
00:12:51.886 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:51.886 "is_configured": true,
00:12:51.886 "data_offset": 0,
00:12:51.886 "data_size": 65536
00:12:51.886 }
00:12:51.886 ]
00:12:51.886 }'
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.886 [2024-11-18 23:08:11.145391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.886 23:08:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:51.887 [2024-11-18 23:08:11.180236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:12:51.887 [2024-11-18 23:08:11.182231] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:52.147 [2024-11-18 23:08:11.284502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:52.147 [2024-11-18 23:08:11.285031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:52.147 [2024-11-18 23:08:11.494934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:52.147 [2024-11-18 23:08:11.495607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:52.681 182.00 IOPS, 546.00 MiB/s [2024-11-18T23:08:12.059Z] [2024-11-18 23:08:11.928047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:52.681 [2024-11-18 23:08:11.928650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:52.941 "name": "raid_bdev1",
00:12:52.941 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:52.941 "strip_size_kb": 0,
00:12:52.941 "state": "online",
00:12:52.941 "raid_level": "raid1",
00:12:52.941 "superblock": false,
00:12:52.941 "num_base_bdevs": 4,
00:12:52.941 "num_base_bdevs_discovered": 4,
00:12:52.941 "num_base_bdevs_operational": 4,
00:12:52.941 "process": {
00:12:52.941 "type": "rebuild",
00:12:52.941 "target": "spare",
00:12:52.941 "progress": {
00:12:52.941 "blocks": 12288,
00:12:52.941 "percent": 18
00:12:52.941 }
00:12:52.941 },
00:12:52.941 "base_bdevs_list": [
00:12:52.941 {
00:12:52.941 "name": "spare",
00:12:52.941 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451",
00:12:52.941 "is_configured": true,
00:12:52.941 "data_offset": 0,
00:12:52.941 "data_size": 65536
00:12:52.941 },
00:12:52.941 {
00:12:52.941 "name": "BaseBdev2",
00:12:52.941 "uuid": "884fd40e-48ae-506c-985d-19deecd06569",
00:12:52.941 "is_configured": true,
00:12:52.941 "data_offset": 0,
00:12:52.941 "data_size": 65536
00:12:52.941 },
00:12:52.941 {
00:12:52.941 "name": "BaseBdev3",
00:12:52.941 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:52.941 "is_configured": true,
00:12:52.941 "data_offset": 0,
00:12:52.941 "data_size": 65536
00:12:52.941 },
00:12:52.941 {
00:12:52.941 "name": "BaseBdev4",
00:12:52.941 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:52.941 "is_configured": true,
00:12:52.941 "data_offset": 0,
00:12:52.941 "data_size": 65536
00:12:52.941 }
00:12:52.941 ]
00:12:52.941 }'
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:52.941 [2024-11-18 23:08:12.261132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:52.941 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:53.210 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:53.210 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:12:53.210 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:12:53.210 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:53.210 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:53.211 [2024-11-18 23:08:12.329409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:53.211 [2024-11-18 23:08:12.402648] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080
00:12:53.211 [2024-11-18 23:08:12.402737] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:53.211 [2024-11-18 23:08:12.429099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.211 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:53.211 "name": "raid_bdev1",
00:12:53.211 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:53.211 "strip_size_kb": 0,
00:12:53.211 "state": "online",
00:12:53.211 "raid_level": "raid1",
00:12:53.212 "superblock": false,
00:12:53.212 "num_base_bdevs": 4,
00:12:53.212 "num_base_bdevs_discovered": 3,
00:12:53.212 "num_base_bdevs_operational": 3,
00:12:53.212 "process": {
00:12:53.212 "type": "rebuild",
00:12:53.212 "target": "spare",
00:12:53.212 "progress": {
00:12:53.212 "blocks": 16384,
00:12:53.212 "percent": 25
00:12:53.212 }
00:12:53.212 },
00:12:53.212 "base_bdevs_list": [
00:12:53.212 {
00:12:53.212 "name": "spare",
00:12:53.212 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451",
00:12:53.212 "is_configured": true,
00:12:53.212 "data_offset": 0,
00:12:53.212 "data_size": 65536
00:12:53.212 },
00:12:53.212 {
00:12:53.212 "name": null,
00:12:53.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:53.212 "is_configured": false,
00:12:53.212 "data_offset": 0,
00:12:53.212 "data_size": 65536
00:12:53.212 },
00:12:53.212 {
00:12:53.212 "name": "BaseBdev3",
00:12:53.212 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:53.212 "is_configured": true,
00:12:53.212 "data_offset": 0,
00:12:53.212 "data_size": 65536
00:12:53.212 },
00:12:53.212 {
00:12:53.212 "name": "BaseBdev4",
00:12:53.212 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:53.212 "is_configured": true,
00:12:53.212 "data_offset": 0,
00:12:53.212 "data_size": 65536
00:12:53.212 }
00:12:53.212 ]
00:12:53.212 }'
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=388
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:53.212 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:53.213 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:53.479 23:08:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.479 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:53.479 "name": "raid_bdev1",
00:12:53.479 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4",
00:12:53.479 "strip_size_kb": 0,
00:12:53.479 "state": "online",
00:12:53.479 "raid_level": "raid1",
00:12:53.479 "superblock": false,
00:12:53.479 "num_base_bdevs": 4,
00:12:53.479 "num_base_bdevs_discovered": 3,
00:12:53.479 "num_base_bdevs_operational": 3,
00:12:53.479 "process": {
00:12:53.479 "type": "rebuild",
00:12:53.479 "target": "spare",
00:12:53.479 "progress": {
00:12:53.479 "blocks": 16384,
00:12:53.479 "percent": 25
00:12:53.479 }
00:12:53.479 },
00:12:53.479 "base_bdevs_list": [
00:12:53.479 {
00:12:53.479 "name": "spare",
00:12:53.479 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451",
00:12:53.479 "is_configured": true,
00:12:53.479 "data_offset": 0,
00:12:53.479 "data_size": 65536
00:12:53.479 },
00:12:53.479 {
00:12:53.479 "name": null,
00:12:53.479 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:53.479 "is_configured": false,
00:12:53.479 "data_offset": 0,
00:12:53.479 "data_size": 65536
00:12:53.479 },
00:12:53.479 {
00:12:53.479 "name": "BaseBdev3",
00:12:53.479 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791",
00:12:53.479 "is_configured": true,
00:12:53.479 "data_offset": 0,
00:12:53.479 "data_size": 65536
00:12:53.479 },
00:12:53.480 {
00:12:53.480 "name": "BaseBdev4",
00:12:53.480 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598",
00:12:53.480 "is_configured": true,
00:12:53.480 "data_offset": 0,
00:12:53.480 "data_size": 65536
00:12:53.480 }
00:12:53.480 ]
00:12:53.480 }'
00:12:53.480 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:53.480 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:53.480 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:53.480 153.50 IOPS, 460.50 MiB/s [2024-11-18T23:08:12.858Z] 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:53.480 23:08:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:53.739 [2024-11-18 23:08:12.920057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:12:54.000 [2024-11-18 23:08:13.273025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:12:54.260 [2024-11-18 23:08:13.614384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:12:54.528 135.00 IOPS, 405.00 MiB/s [2024-11-18T23:08:13.906Z] 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local
raid_bdev_info 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.528 "name": "raid_bdev1", 00:12:54.528 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4", 00:12:54.528 "strip_size_kb": 0, 00:12:54.528 "state": "online", 00:12:54.528 "raid_level": "raid1", 00:12:54.528 "superblock": false, 00:12:54.528 "num_base_bdevs": 4, 00:12:54.528 "num_base_bdevs_discovered": 3, 00:12:54.528 "num_base_bdevs_operational": 3, 00:12:54.528 "process": { 00:12:54.528 "type": "rebuild", 00:12:54.528 "target": "spare", 00:12:54.528 "progress": { 00:12:54.528 "blocks": 32768, 00:12:54.528 "percent": 50 00:12:54.528 } 00:12:54.528 }, 00:12:54.528 "base_bdevs_list": [ 00:12:54.528 { 00:12:54.528 "name": "spare", 00:12:54.528 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451", 00:12:54.528 "is_configured": true, 00:12:54.528 "data_offset": 0, 00:12:54.528 "data_size": 65536 00:12:54.528 }, 00:12:54.528 { 00:12:54.528 "name": null, 00:12:54.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.528 "is_configured": false, 00:12:54.528 "data_offset": 0, 00:12:54.528 "data_size": 65536 00:12:54.528 }, 00:12:54.528 { 00:12:54.528 "name": "BaseBdev3", 00:12:54.528 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791", 00:12:54.528 "is_configured": true, 00:12:54.528 "data_offset": 0, 00:12:54.528 "data_size": 65536 00:12:54.528 }, 00:12:54.528 { 00:12:54.528 "name": "BaseBdev4", 00:12:54.528 "uuid": 
"fb02e92d-7867-56e2-8bfc-495bf9ecc598", 00:12:54.528 "is_configured": true, 00:12:54.528 "data_offset": 0, 00:12:54.528 "data_size": 65536 00:12:54.528 } 00:12:54.528 ] 00:12:54.528 }' 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.528 23:08:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.098 [2024-11-18 23:08:14.166162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:55.621 119.83 IOPS, 359.50 MiB/s [2024-11-18T23:08:14.999Z] 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.621 23:08:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.621 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.621 "name": "raid_bdev1", 00:12:55.621 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4", 00:12:55.621 "strip_size_kb": 0, 00:12:55.621 "state": "online", 00:12:55.621 "raid_level": "raid1", 00:12:55.621 "superblock": false, 00:12:55.621 "num_base_bdevs": 4, 00:12:55.621 "num_base_bdevs_discovered": 3, 00:12:55.621 "num_base_bdevs_operational": 3, 00:12:55.621 "process": { 00:12:55.621 "type": "rebuild", 00:12:55.621 "target": "spare", 00:12:55.621 "progress": { 00:12:55.621 "blocks": 53248, 00:12:55.621 "percent": 81 00:12:55.621 } 00:12:55.621 }, 00:12:55.621 "base_bdevs_list": [ 00:12:55.621 { 00:12:55.621 "name": "spare", 00:12:55.621 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451", 00:12:55.621 "is_configured": true, 00:12:55.621 "data_offset": 0, 00:12:55.621 "data_size": 65536 00:12:55.621 }, 00:12:55.621 { 00:12:55.621 "name": null, 00:12:55.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.621 "is_configured": false, 00:12:55.621 "data_offset": 0, 00:12:55.621 "data_size": 65536 00:12:55.621 }, 00:12:55.621 { 00:12:55.621 "name": "BaseBdev3", 00:12:55.621 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791", 00:12:55.621 "is_configured": true, 00:12:55.621 "data_offset": 0, 00:12:55.621 "data_size": 65536 00:12:55.621 }, 00:12:55.621 { 00:12:55.621 "name": "BaseBdev4", 00:12:55.621 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598", 00:12:55.621 "is_configured": true, 00:12:55.621 "data_offset": 0, 00:12:55.621 "data_size": 65536 00:12:55.621 } 00:12:55.621 ] 00:12:55.621 }' 00:12:55.622 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.622 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:55.622 23:08:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.953 23:08:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.953 23:08:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.212 [2024-11-18 23:08:15.485083] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:56.472 [2024-11-18 23:08:15.590241] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:56.472 [2024-11-18 23:08:15.593528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.733 107.14 IOPS, 321.43 MiB/s [2024-11-18T23:08:16.111Z] 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.733 "name": "raid_bdev1", 00:12:56.733 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4", 00:12:56.733 "strip_size_kb": 0, 00:12:56.733 "state": "online", 00:12:56.733 "raid_level": "raid1", 00:12:56.733 "superblock": false, 00:12:56.733 "num_base_bdevs": 4, 00:12:56.733 "num_base_bdevs_discovered": 3, 00:12:56.733 "num_base_bdevs_operational": 3, 00:12:56.733 "base_bdevs_list": [ 00:12:56.733 { 00:12:56.733 "name": "spare", 00:12:56.733 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451", 00:12:56.733 "is_configured": true, 00:12:56.733 "data_offset": 0, 00:12:56.733 "data_size": 65536 00:12:56.733 }, 00:12:56.733 { 00:12:56.733 "name": null, 00:12:56.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.733 "is_configured": false, 00:12:56.733 "data_offset": 0, 00:12:56.733 "data_size": 65536 00:12:56.733 }, 00:12:56.733 { 00:12:56.733 "name": "BaseBdev3", 00:12:56.733 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791", 00:12:56.733 "is_configured": true, 00:12:56.733 "data_offset": 0, 00:12:56.733 "data_size": 65536 00:12:56.733 }, 00:12:56.733 { 00:12:56.733 "name": "BaseBdev4", 00:12:56.733 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598", 00:12:56.733 "is_configured": true, 00:12:56.733 "data_offset": 0, 00:12:56.733 "data_size": 65536 00:12:56.733 } 00:12:56.733 ] 00:12:56.733 }' 00:12:56.733 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:56.994 23:08:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.994 "name": "raid_bdev1", 00:12:56.994 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4", 00:12:56.994 "strip_size_kb": 0, 00:12:56.994 "state": "online", 00:12:56.994 "raid_level": "raid1", 00:12:56.994 "superblock": false, 00:12:56.994 "num_base_bdevs": 4, 00:12:56.994 "num_base_bdevs_discovered": 3, 00:12:56.994 "num_base_bdevs_operational": 3, 00:12:56.994 "base_bdevs_list": [ 00:12:56.994 { 00:12:56.994 "name": "spare", 00:12:56.994 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451", 00:12:56.994 "is_configured": true, 00:12:56.994 "data_offset": 0, 00:12:56.994 "data_size": 65536 00:12:56.994 }, 00:12:56.994 { 00:12:56.994 "name": null, 00:12:56.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.994 "is_configured": false, 00:12:56.994 "data_offset": 0, 00:12:56.994 "data_size": 65536 
00:12:56.994 }, 00:12:56.994 { 00:12:56.994 "name": "BaseBdev3", 00:12:56.994 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791", 00:12:56.994 "is_configured": true, 00:12:56.994 "data_offset": 0, 00:12:56.994 "data_size": 65536 00:12:56.994 }, 00:12:56.994 { 00:12:56.994 "name": "BaseBdev4", 00:12:56.994 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598", 00:12:56.994 "is_configured": true, 00:12:56.994 "data_offset": 0, 00:12:56.994 "data_size": 65536 00:12:56.994 } 00:12:56.994 ] 00:12:56.994 }' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.994 23:08:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.994 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.255 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.255 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.255 "name": "raid_bdev1", 00:12:57.255 "uuid": "19f4f794-bb16-4fe1-af0a-d8ae10526dc4", 00:12:57.255 "strip_size_kb": 0, 00:12:57.255 "state": "online", 00:12:57.255 "raid_level": "raid1", 00:12:57.255 "superblock": false, 00:12:57.255 "num_base_bdevs": 4, 00:12:57.255 "num_base_bdevs_discovered": 3, 00:12:57.255 "num_base_bdevs_operational": 3, 00:12:57.255 "base_bdevs_list": [ 00:12:57.255 { 00:12:57.255 "name": "spare", 00:12:57.255 "uuid": "005585c8-fb83-56fa-a7db-9d2f1b0ba451", 00:12:57.255 "is_configured": true, 00:12:57.255 "data_offset": 0, 00:12:57.255 "data_size": 65536 00:12:57.255 }, 00:12:57.255 { 00:12:57.255 "name": null, 00:12:57.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.255 "is_configured": false, 00:12:57.255 "data_offset": 0, 00:12:57.255 "data_size": 65536 00:12:57.255 }, 00:12:57.255 { 00:12:57.255 "name": "BaseBdev3", 00:12:57.255 "uuid": "412a7908-9c1b-5e57-b673-0690a7070791", 00:12:57.255 "is_configured": true, 00:12:57.255 "data_offset": 0, 00:12:57.255 "data_size": 65536 00:12:57.255 }, 00:12:57.255 { 00:12:57.255 "name": "BaseBdev4", 00:12:57.255 "uuid": "fb02e92d-7867-56e2-8bfc-495bf9ecc598", 00:12:57.255 "is_configured": true, 00:12:57.255 "data_offset": 0, 00:12:57.255 "data_size": 65536 00:12:57.255 } 
00:12:57.255 ] 00:12:57.255 }' 00:12:57.255 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.255 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 98.62 IOPS, 295.88 MiB/s [2024-11-18T23:08:16.893Z] 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 [2024-11-18 23:08:16.795329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.515 [2024-11-18 23:08:16.795398] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.515 00:12:57.515 Latency(us) 00:12:57.515 [2024-11-18T23:08:16.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.515 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:57.515 raid_bdev1 : 8.18 97.81 293.43 0.00 0.00 15217.18 279.03 114473.36 00:12:57.515 [2024-11-18T23:08:16.893Z] =================================================================================================================== 00:12:57.515 [2024-11-18T23:08:16.893Z] Total : 97.81 293.43 0.00 0.00 15217.18 279.03 114473.36 00:12:57.515 [2024-11-18 23:08:16.882113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.515 [2024-11-18 23:08:16.882183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.515 [2024-11-18 23:08:16.882324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.515 [2024-11-18 23:08:16.882374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:57.515 { 
00:12:57.515 "results": [ 00:12:57.515 { 00:12:57.515 "job": "raid_bdev1", 00:12:57.515 "core_mask": "0x1", 00:12:57.515 "workload": "randrw", 00:12:57.515 "percentage": 50, 00:12:57.515 "status": "finished", 00:12:57.515 "queue_depth": 2, 00:12:57.515 "io_size": 3145728, 00:12:57.515 "runtime": 8.179035, 00:12:57.515 "iops": 97.81104983656385, 00:12:57.515 "mibps": 293.4331495096916, 00:12:57.515 "io_failed": 0, 00:12:57.515 "io_timeout": 0, 00:12:57.515 "avg_latency_us": 15217.175475982534, 00:12:57.515 "min_latency_us": 279.0288209606987, 00:12:57.515 "max_latency_us": 114473.36244541485 00:12:57.515 } 00:12:57.515 ], 00:12:57.515 "core_count": 1 00:12:57.515 } 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:57.515 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:57.775 23:08:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.775 23:08:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:57.775 /dev/nbd0 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:58.036 1+0 records in 00:12:58.036 1+0 records out 00:12:58.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416982 s, 9.8 MB/s 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:58.036 /dev/nbd1 00:12:58.036 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.303 1+0 records in 00:12:58.303 1+0 records out 00:12:58.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334632 s, 12.2 MB/s 00:12:58.303 23:08:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.303 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd1 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.567 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:58.567 /dev/nbd1 00:12:58.827 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.828 
23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.828 1+0 records in 00:12:58.828 1+0 records out 00:12:58.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420817 s, 9.7 MB/s 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.828 23:08:17 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.828 23:08:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.828 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:59.092 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:59.092 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:59.092 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:59.092 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.092 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.092 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.093 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89297 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89297 ']' 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89297 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:59.354 23:08:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89297 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.354 killing process with pid 89297 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89297' 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89297 00:12:59.354 Received shutdown signal, test time was about 9.823436 seconds 00:12:59.354 00:12:59.354 Latency(us) 00:12:59.354 [2024-11-18T23:08:18.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.354 [2024-11-18T23:08:18.732Z] =================================================================================================================== 00:12:59.354 [2024-11-18T23:08:18.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:59.354 [2024-11-18 23:08:18.520312] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.354 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89297 00:12:59.354 [2024-11-18 23:08:18.564761] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:59.617 00:12:59.617 real 0m11.801s 00:12:59.617 user 0m15.318s 00:12:59.617 sys 0m1.820s 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 ************************************ 00:12:59.617 END TEST raid_rebuild_test_io 00:12:59.617 
************************************ 00:12:59.617 23:08:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:59.617 23:08:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:59.617 23:08:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.617 23:08:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 ************************************ 00:12:59.617 START TEST raid_rebuild_test_sb_io 00:12:59.617 ************************************ 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.617 23:08:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=89696 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89696 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89696 ']' 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.617 23:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.617 Zero copy mechanism will not be used. 00:12:59.617 [2024-11-18 23:08:18.986385] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:59.617 [2024-11-18 23:08:18.986508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89696 ] 00:12:59.878 [2024-11-18 23:08:19.152345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.878 [2024-11-18 23:08:19.199745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.878 [2024-11-18 23:08:19.242728] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.878 [2024-11-18 23:08:19.242767] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.446 BaseBdev1_malloc 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.446 [2024-11-18 23:08:19.805601] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:00.446 [2024-11-18 23:08:19.805666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.446 [2024-11-18 23:08:19.805692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:00.446 [2024-11-18 23:08:19.805707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.446 [2024-11-18 23:08:19.807828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.446 [2024-11-18 23:08:19.807861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.446 BaseBdev1 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.446 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.447 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.447 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.447 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 BaseBdev2_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 [2024-11-18 23:08:19.841863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:00.707 [2024-11-18 23:08:19.841917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:00.707 [2024-11-18 23:08:19.841940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:00.707 [2024-11-18 23:08:19.841950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.707 [2024-11-18 23:08:19.844330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.707 [2024-11-18 23:08:19.844360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.707 BaseBdev2 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 BaseBdev3_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 [2024-11-18 23:08:19.870519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:00.707 [2024-11-18 23:08:19.870560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.707 [2024-11-18 23:08:19.870583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:00.707 
[2024-11-18 23:08:19.870591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.707 [2024-11-18 23:08:19.872583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.707 [2024-11-18 23:08:19.872613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:00.707 BaseBdev3 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 BaseBdev4_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 [2024-11-18 23:08:19.899143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:00.707 [2024-11-18 23:08:19.899192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.707 [2024-11-18 23:08:19.899216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:00.707 [2024-11-18 23:08:19.899224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.707 [2024-11-18 23:08:19.901251] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.707 [2024-11-18 23:08:19.901293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:00.707 BaseBdev4 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 spare_malloc 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 spare_delay 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 [2024-11-18 23:08:19.939908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.707 [2024-11-18 23:08:19.939954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.707 [2024-11-18 23:08:19.939974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:13:00.707 [2024-11-18 23:08:19.939983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.707 [2024-11-18 23:08:19.942023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.707 [2024-11-18 23:08:19.942053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.707 spare 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.707 [2024-11-18 23:08:19.951966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.707 [2024-11-18 23:08:19.953749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.707 [2024-11-18 23:08:19.953820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.707 [2024-11-18 23:08:19.953861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.707 [2024-11-18 23:08:19.954035] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:00.707 [2024-11-18 23:08:19.954056] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.707 [2024-11-18 23:08:19.954333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:00.707 [2024-11-18 23:08:19.954488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:00.707 [2024-11-18 23:08:19.954507] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:00.707 [2024-11-18 23:08:19.954625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.707 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.708 23:08:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.708 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.708 "name": "raid_bdev1", 00:13:00.708 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:00.708 "strip_size_kb": 0, 00:13:00.708 "state": "online", 00:13:00.708 "raid_level": "raid1", 00:13:00.708 "superblock": true, 00:13:00.708 "num_base_bdevs": 4, 00:13:00.708 "num_base_bdevs_discovered": 4, 00:13:00.708 "num_base_bdevs_operational": 4, 00:13:00.708 "base_bdevs_list": [ 00:13:00.708 { 00:13:00.708 "name": "BaseBdev1", 00:13:00.708 "uuid": "2b0a3378-2239-52d2-a955-022c69a0b3d4", 00:13:00.708 "is_configured": true, 00:13:00.708 "data_offset": 2048, 00:13:00.708 "data_size": 63488 00:13:00.708 }, 00:13:00.708 { 00:13:00.708 "name": "BaseBdev2", 00:13:00.708 "uuid": "438b0849-12d3-5e37-96ef-7ca249b1ee65", 00:13:00.708 "is_configured": true, 00:13:00.708 "data_offset": 2048, 00:13:00.708 "data_size": 63488 00:13:00.708 }, 00:13:00.708 { 00:13:00.708 "name": "BaseBdev3", 00:13:00.708 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:00.708 "is_configured": true, 00:13:00.708 "data_offset": 2048, 00:13:00.708 "data_size": 63488 00:13:00.708 }, 00:13:00.708 { 00:13:00.708 "name": "BaseBdev4", 00:13:00.708 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:00.708 "is_configured": true, 00:13:00.708 "data_offset": 2048, 00:13:00.708 "data_size": 63488 00:13:00.708 } 00:13:00.708 ] 00:13:00.708 }' 00:13:00.708 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.708 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- 
# jq -r '.[].num_blocks' 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.278 [2024-11-18 23:08:20.423702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.278 [2024-11-18 23:08:20.519390] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.278 23:08:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.278 "name": "raid_bdev1", 00:13:01.278 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:01.278 "strip_size_kb": 0, 00:13:01.278 "state": "online", 00:13:01.278 "raid_level": "raid1", 00:13:01.278 
"superblock": true, 00:13:01.278 "num_base_bdevs": 4, 00:13:01.278 "num_base_bdevs_discovered": 3, 00:13:01.278 "num_base_bdevs_operational": 3, 00:13:01.278 "base_bdevs_list": [ 00:13:01.278 { 00:13:01.278 "name": null, 00:13:01.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.278 "is_configured": false, 00:13:01.278 "data_offset": 0, 00:13:01.278 "data_size": 63488 00:13:01.278 }, 00:13:01.278 { 00:13:01.278 "name": "BaseBdev2", 00:13:01.278 "uuid": "438b0849-12d3-5e37-96ef-7ca249b1ee65", 00:13:01.278 "is_configured": true, 00:13:01.278 "data_offset": 2048, 00:13:01.278 "data_size": 63488 00:13:01.278 }, 00:13:01.278 { 00:13:01.278 "name": "BaseBdev3", 00:13:01.278 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:01.278 "is_configured": true, 00:13:01.278 "data_offset": 2048, 00:13:01.278 "data_size": 63488 00:13:01.278 }, 00:13:01.278 { 00:13:01.278 "name": "BaseBdev4", 00:13:01.278 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:01.278 "is_configured": true, 00:13:01.278 "data_offset": 2048, 00:13:01.278 "data_size": 63488 00:13:01.278 } 00:13:01.278 ] 00:13:01.278 }' 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.278 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.278 [2024-11-18 23:08:20.609385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:01.278 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.278 Zero copy mechanism will not be used. 00:13:01.278 Running I/O for 60 seconds... 
00:13:01.847 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.847 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.848 23:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 [2024-11-18 23:08:21.003161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.848 23:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.848 23:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:01.848 [2024-11-18 23:08:21.043537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:01.848 [2024-11-18 23:08:21.045537] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.107 [2024-11-18 23:08:21.310608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.107 [2024-11-18 23:08:21.310932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.367 205.00 IOPS, 615.00 MiB/s [2024-11-18T23:08:21.745Z] [2024-11-18 23:08:21.637738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.628 [2024-11-18 23:08:21.760304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.889 [2024-11-18 23:08:22.094916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.889 "name": "raid_bdev1", 00:13:02.889 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:02.889 "strip_size_kb": 0, 00:13:02.889 "state": "online", 00:13:02.889 "raid_level": "raid1", 00:13:02.889 "superblock": true, 00:13:02.889 "num_base_bdevs": 4, 00:13:02.889 "num_base_bdevs_discovered": 4, 00:13:02.889 "num_base_bdevs_operational": 4, 00:13:02.889 "process": { 00:13:02.889 "type": "rebuild", 00:13:02.889 "target": "spare", 00:13:02.889 "progress": { 00:13:02.889 "blocks": 12288, 00:13:02.889 "percent": 19 00:13:02.889 } 00:13:02.889 }, 00:13:02.889 "base_bdevs_list": [ 00:13:02.889 { 00:13:02.889 "name": "spare", 00:13:02.889 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:02.889 "is_configured": true, 00:13:02.889 "data_offset": 2048, 00:13:02.889 "data_size": 63488 00:13:02.889 }, 00:13:02.889 { 00:13:02.889 "name": "BaseBdev2", 00:13:02.889 "uuid": "438b0849-12d3-5e37-96ef-7ca249b1ee65", 00:13:02.889 
"is_configured": true, 00:13:02.889 "data_offset": 2048, 00:13:02.889 "data_size": 63488 00:13:02.889 }, 00:13:02.889 { 00:13:02.889 "name": "BaseBdev3", 00:13:02.889 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:02.889 "is_configured": true, 00:13:02.889 "data_offset": 2048, 00:13:02.889 "data_size": 63488 00:13:02.889 }, 00:13:02.889 { 00:13:02.889 "name": "BaseBdev4", 00:13:02.889 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:02.889 "is_configured": true, 00:13:02.889 "data_offset": 2048, 00:13:02.889 "data_size": 63488 00:13:02.889 } 00:13:02.889 ] 00:13:02.889 }' 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.889 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.889 [2024-11-18 23:08:22.209200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.149 [2024-11-18 23:08:22.317134] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:03.149 [2024-11-18 23:08:22.326462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.149 [2024-11-18 23:08:22.326502] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.149 [2024-11-18 23:08:22.326526] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:13:03.149 [2024-11-18 23:08:22.337212] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.149 23:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.149 "name": "raid_bdev1", 00:13:03.149 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:03.149 "strip_size_kb": 0, 00:13:03.149 "state": "online", 00:13:03.149 "raid_level": "raid1", 00:13:03.149 "superblock": true, 00:13:03.149 "num_base_bdevs": 4, 00:13:03.149 "num_base_bdevs_discovered": 3, 00:13:03.149 "num_base_bdevs_operational": 3, 00:13:03.149 "base_bdevs_list": [ 00:13:03.149 { 00:13:03.149 "name": null, 00:13:03.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.149 "is_configured": false, 00:13:03.149 "data_offset": 0, 00:13:03.149 "data_size": 63488 00:13:03.149 }, 00:13:03.149 { 00:13:03.149 "name": "BaseBdev2", 00:13:03.149 "uuid": "438b0849-12d3-5e37-96ef-7ca249b1ee65", 00:13:03.149 "is_configured": true, 00:13:03.149 "data_offset": 2048, 00:13:03.149 "data_size": 63488 00:13:03.149 }, 00:13:03.149 { 00:13:03.149 "name": "BaseBdev3", 00:13:03.149 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:03.149 "is_configured": true, 00:13:03.149 "data_offset": 2048, 00:13:03.149 "data_size": 63488 00:13:03.149 }, 00:13:03.149 { 00:13:03.149 "name": "BaseBdev4", 00:13:03.149 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:03.149 "is_configured": true, 00:13:03.149 "data_offset": 2048, 00:13:03.149 "data_size": 63488 00:13:03.149 } 00:13:03.149 ] 00:13:03.149 }' 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.149 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.670 159.00 IOPS, 477.00 MiB/s [2024-11-18T23:08:23.048Z] 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.670 "name": "raid_bdev1", 00:13:03.670 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:03.670 "strip_size_kb": 0, 00:13:03.670 "state": "online", 00:13:03.670 "raid_level": "raid1", 00:13:03.670 "superblock": true, 00:13:03.670 "num_base_bdevs": 4, 00:13:03.670 "num_base_bdevs_discovered": 3, 00:13:03.670 "num_base_bdevs_operational": 3, 00:13:03.670 "base_bdevs_list": [ 00:13:03.670 { 00:13:03.670 "name": null, 00:13:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.670 "is_configured": false, 00:13:03.670 "data_offset": 0, 00:13:03.670 "data_size": 63488 00:13:03.670 }, 00:13:03.670 { 00:13:03.670 "name": "BaseBdev2", 00:13:03.670 "uuid": "438b0849-12d3-5e37-96ef-7ca249b1ee65", 00:13:03.670 "is_configured": true, 00:13:03.670 "data_offset": 2048, 00:13:03.670 "data_size": 63488 00:13:03.670 }, 00:13:03.670 { 00:13:03.670 "name": "BaseBdev3", 00:13:03.670 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:03.670 "is_configured": true, 00:13:03.670 "data_offset": 2048, 00:13:03.670 "data_size": 63488 00:13:03.670 }, 00:13:03.670 { 00:13:03.670 "name": 
"BaseBdev4", 00:13:03.670 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:03.670 "is_configured": true, 00:13:03.670 "data_offset": 2048, 00:13:03.670 "data_size": 63488 00:13:03.670 } 00:13:03.670 ] 00:13:03.670 }' 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.670 23:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.670 [2024-11-18 23:08:22.994112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.670 23:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.670 23:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:03.929 [2024-11-18 23:08:23.058105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:03.930 [2024-11-18 23:08:23.060098] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.930 [2024-11-18 23:08:23.161989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.930 [2024-11-18 23:08:23.162335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.930 [2024-11-18 23:08:23.269995] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.930 [2024-11-18 23:08:23.270586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.502 159.00 IOPS, 477.00 MiB/s [2024-11-18T23:08:23.880Z] [2024-11-18 23:08:23.745148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.765 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.766 "name": "raid_bdev1", 00:13:04.766 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:04.766 "strip_size_kb": 0, 00:13:04.766 "state": "online", 00:13:04.766 "raid_level": "raid1", 00:13:04.766 "superblock": true, 00:13:04.766 "num_base_bdevs": 4, 
00:13:04.766 "num_base_bdevs_discovered": 4, 00:13:04.766 "num_base_bdevs_operational": 4, 00:13:04.766 "process": { 00:13:04.766 "type": "rebuild", 00:13:04.766 "target": "spare", 00:13:04.766 "progress": { 00:13:04.766 "blocks": 12288, 00:13:04.766 "percent": 19 00:13:04.766 } 00:13:04.766 }, 00:13:04.766 "base_bdevs_list": [ 00:13:04.766 { 00:13:04.766 "name": "spare", 00:13:04.766 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:04.766 "is_configured": true, 00:13:04.766 "data_offset": 2048, 00:13:04.766 "data_size": 63488 00:13:04.766 }, 00:13:04.766 { 00:13:04.766 "name": "BaseBdev2", 00:13:04.766 "uuid": "438b0849-12d3-5e37-96ef-7ca249b1ee65", 00:13:04.766 "is_configured": true, 00:13:04.766 "data_offset": 2048, 00:13:04.766 "data_size": 63488 00:13:04.766 }, 00:13:04.766 { 00:13:04.766 "name": "BaseBdev3", 00:13:04.766 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:04.766 "is_configured": true, 00:13:04.766 "data_offset": 2048, 00:13:04.766 "data_size": 63488 00:13:04.766 }, 00:13:04.766 { 00:13:04.766 "name": "BaseBdev4", 00:13:04.766 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:04.766 "is_configured": true, 00:13:04.766 "data_offset": 2048, 00:13:04.766 "data_size": 63488 00:13:04.766 } 00:13:04.766 ] 00:13:04.766 }' 00:13:04.766 [2024-11-18 23:08:24.082751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.766 [2024-11-18 23:08:24.083919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.766 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.027 23:08:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:05.027 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.027 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.027 [2024-11-18 23:08:24.192757] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:05.027 [2024-11-18 23:08:24.305865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:05.288 [2024-11-18 23:08:24.507839] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:05.288 [2024-11-18 23:08:24.507877] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:05.288 23:08:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.288 "name": "raid_bdev1", 00:13:05.288 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:05.288 "strip_size_kb": 0, 00:13:05.288 "state": "online", 00:13:05.288 "raid_level": "raid1", 00:13:05.288 "superblock": true, 00:13:05.288 "num_base_bdevs": 4, 00:13:05.288 "num_base_bdevs_discovered": 3, 00:13:05.288 "num_base_bdevs_operational": 3, 00:13:05.288 "process": { 00:13:05.288 "type": "rebuild", 00:13:05.288 "target": "spare", 00:13:05.288 "progress": { 00:13:05.288 "blocks": 16384, 00:13:05.288 "percent": 25 00:13:05.288 } 00:13:05.288 }, 00:13:05.288 "base_bdevs_list": [ 00:13:05.288 { 00:13:05.288 "name": "spare", 00:13:05.288 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:05.288 "is_configured": true, 00:13:05.288 "data_offset": 2048, 
00:13:05.288 "data_size": 63488 00:13:05.288 }, 00:13:05.288 { 00:13:05.288 "name": null, 00:13:05.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.288 "is_configured": false, 00:13:05.288 "data_offset": 0, 00:13:05.288 "data_size": 63488 00:13:05.288 }, 00:13:05.288 { 00:13:05.288 "name": "BaseBdev3", 00:13:05.288 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:05.288 "is_configured": true, 00:13:05.288 "data_offset": 2048, 00:13:05.288 "data_size": 63488 00:13:05.288 }, 00:13:05.288 { 00:13:05.288 "name": "BaseBdev4", 00:13:05.288 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:05.288 "is_configured": true, 00:13:05.288 "data_offset": 2048, 00:13:05.288 "data_size": 63488 00:13:05.288 } 00:13:05.288 ] 00:13:05.288 }' 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.288 135.75 IOPS, 407.25 MiB/s [2024-11-18T23:08:24.666Z] 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.288 23:08:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.288 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.549 "name": "raid_bdev1", 00:13:05.549 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:05.549 "strip_size_kb": 0, 00:13:05.549 "state": "online", 00:13:05.549 "raid_level": "raid1", 00:13:05.549 "superblock": true, 00:13:05.549 "num_base_bdevs": 4, 00:13:05.549 "num_base_bdevs_discovered": 3, 00:13:05.549 "num_base_bdevs_operational": 3, 00:13:05.549 "process": { 00:13:05.549 "type": "rebuild", 00:13:05.549 "target": "spare", 00:13:05.549 "progress": { 00:13:05.549 "blocks": 18432, 00:13:05.549 "percent": 29 00:13:05.549 } 00:13:05.549 }, 00:13:05.549 "base_bdevs_list": [ 00:13:05.549 { 00:13:05.549 "name": "spare", 00:13:05.549 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:05.549 "is_configured": true, 00:13:05.549 "data_offset": 2048, 00:13:05.549 "data_size": 63488 00:13:05.549 }, 00:13:05.549 { 00:13:05.549 "name": null, 00:13:05.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.549 "is_configured": false, 00:13:05.549 "data_offset": 0, 00:13:05.549 "data_size": 63488 00:13:05.549 }, 00:13:05.549 { 00:13:05.549 "name": "BaseBdev3", 00:13:05.549 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:05.549 "is_configured": true, 00:13:05.549 "data_offset": 2048, 00:13:05.549 "data_size": 
63488 00:13:05.549 }, 00:13:05.549 { 00:13:05.549 "name": "BaseBdev4", 00:13:05.549 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:05.549 "is_configured": true, 00:13:05.549 "data_offset": 2048, 00:13:05.549 "data_size": 63488 00:13:05.549 } 00:13:05.549 ] 00:13:05.549 }' 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.549 23:08:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.549 [2024-11-18 23:08:24.843791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:05.810 [2024-11-18 23:08:25.054181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:05.810 [2024-11-18 23:08:25.168373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:06.069 [2024-11-18 23:08:25.388413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:06.589 119.60 IOPS, 358.80 MiB/s [2024-11-18T23:08:25.967Z] 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.589 "name": "raid_bdev1", 00:13:06.589 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:06.589 "strip_size_kb": 0, 00:13:06.589 "state": "online", 00:13:06.589 "raid_level": "raid1", 00:13:06.589 "superblock": true, 00:13:06.589 "num_base_bdevs": 4, 00:13:06.589 "num_base_bdevs_discovered": 3, 00:13:06.589 "num_base_bdevs_operational": 3, 00:13:06.589 "process": { 00:13:06.589 "type": "rebuild", 00:13:06.589 "target": "spare", 00:13:06.589 "progress": { 00:13:06.589 "blocks": 38912, 00:13:06.589 "percent": 61 00:13:06.589 } 00:13:06.589 }, 00:13:06.589 "base_bdevs_list": [ 00:13:06.589 { 00:13:06.589 "name": "spare", 00:13:06.589 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:06.589 "is_configured": true, 00:13:06.589 "data_offset": 2048, 00:13:06.589 "data_size": 63488 00:13:06.589 }, 00:13:06.589 { 00:13:06.589 "name": null, 00:13:06.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.589 "is_configured": false, 00:13:06.589 "data_offset": 0, 00:13:06.589 "data_size": 63488 00:13:06.589 }, 00:13:06.589 { 00:13:06.589 "name": "BaseBdev3", 
00:13:06.589 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:06.589 "is_configured": true, 00:13:06.589 "data_offset": 2048, 00:13:06.589 "data_size": 63488 00:13:06.589 }, 00:13:06.589 { 00:13:06.589 "name": "BaseBdev4", 00:13:06.589 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:06.589 "is_configured": true, 00:13:06.589 "data_offset": 2048, 00:13:06.589 "data_size": 63488 00:13:06.589 } 00:13:06.589 ] 00:13:06.589 }' 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.589 [2024-11-18 23:08:25.835816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.589 23:08:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.850 [2024-11-18 23:08:26.055483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:07.110 [2024-11-18 23:08:26.391202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:07.371 106.33 IOPS, 319.00 MiB/s [2024-11-18T23:08:26.749Z] [2024-11-18 23:08:26.719249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:07.629 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.629 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.630 23:08:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.630 [2024-11-18 23:08:26.938766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.630 "name": "raid_bdev1", 00:13:07.630 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:07.630 "strip_size_kb": 0, 00:13:07.630 "state": "online", 00:13:07.630 "raid_level": "raid1", 00:13:07.630 "superblock": true, 00:13:07.630 "num_base_bdevs": 4, 00:13:07.630 "num_base_bdevs_discovered": 3, 00:13:07.630 "num_base_bdevs_operational": 3, 00:13:07.630 "process": { 00:13:07.630 "type": "rebuild", 00:13:07.630 "target": "spare", 00:13:07.630 "progress": { 00:13:07.630 "blocks": 57344, 00:13:07.630 "percent": 90 00:13:07.630 } 00:13:07.630 }, 00:13:07.630 "base_bdevs_list": [ 00:13:07.630 { 00:13:07.630 "name": "spare", 00:13:07.630 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:07.630 "is_configured": true, 
00:13:07.630 "data_offset": 2048, 00:13:07.630 "data_size": 63488 00:13:07.630 }, 00:13:07.630 { 00:13:07.630 "name": null, 00:13:07.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.630 "is_configured": false, 00:13:07.630 "data_offset": 0, 00:13:07.630 "data_size": 63488 00:13:07.630 }, 00:13:07.630 { 00:13:07.630 "name": "BaseBdev3", 00:13:07.630 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:07.630 "is_configured": true, 00:13:07.630 "data_offset": 2048, 00:13:07.630 "data_size": 63488 00:13:07.630 }, 00:13:07.630 { 00:13:07.630 "name": "BaseBdev4", 00:13:07.630 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:07.630 "is_configured": true, 00:13:07.630 "data_offset": 2048, 00:13:07.630 "data_size": 63488 00:13:07.630 } 00:13:07.630 ] 00:13:07.630 }' 00:13:07.630 23:08:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.889 23:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.889 23:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.889 23:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.889 23:08:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.889 [2024-11-18 23:08:27.260245] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.149 [2024-11-18 23:08:27.360047] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.150 [2024-11-18 23:08:27.367674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.977 95.29 IOPS, 285.86 MiB/s [2024-11-18T23:08:28.355Z] 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.977 "name": "raid_bdev1", 00:13:08.977 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:08.977 "strip_size_kb": 0, 00:13:08.977 "state": "online", 00:13:08.977 "raid_level": "raid1", 00:13:08.977 "superblock": true, 00:13:08.977 "num_base_bdevs": 4, 00:13:08.977 "num_base_bdevs_discovered": 3, 00:13:08.977 "num_base_bdevs_operational": 3, 00:13:08.977 "base_bdevs_list": [ 00:13:08.977 { 00:13:08.977 "name": "spare", 00:13:08.977 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:08.977 "is_configured": true, 00:13:08.977 "data_offset": 2048, 00:13:08.977 "data_size": 63488 00:13:08.977 }, 00:13:08.977 { 00:13:08.977 "name": null, 00:13:08.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.977 "is_configured": false, 00:13:08.977 "data_offset": 0, 00:13:08.977 "data_size": 63488 00:13:08.977 }, 
00:13:08.977 { 00:13:08.977 "name": "BaseBdev3", 00:13:08.977 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:08.977 "is_configured": true, 00:13:08.977 "data_offset": 2048, 00:13:08.977 "data_size": 63488 00:13:08.977 }, 00:13:08.977 { 00:13:08.977 "name": "BaseBdev4", 00:13:08.977 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:08.977 "is_configured": true, 00:13:08.977 "data_offset": 2048, 00:13:08.977 "data_size": 63488 00:13:08.977 } 00:13:08.977 ] 00:13:08.977 }' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.977 23:08:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.977 "name": "raid_bdev1", 00:13:08.977 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:08.977 "strip_size_kb": 0, 00:13:08.977 "state": "online", 00:13:08.977 "raid_level": "raid1", 00:13:08.977 "superblock": true, 00:13:08.977 "num_base_bdevs": 4, 00:13:08.977 "num_base_bdevs_discovered": 3, 00:13:08.977 "num_base_bdevs_operational": 3, 00:13:08.977 "base_bdevs_list": [ 00:13:08.977 { 00:13:08.977 "name": "spare", 00:13:08.977 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:08.977 "is_configured": true, 00:13:08.977 "data_offset": 2048, 00:13:08.977 "data_size": 63488 00:13:08.977 }, 00:13:08.977 { 00:13:08.977 "name": null, 00:13:08.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.977 "is_configured": false, 00:13:08.977 "data_offset": 0, 00:13:08.977 "data_size": 63488 00:13:08.977 }, 00:13:08.977 { 00:13:08.977 "name": "BaseBdev3", 00:13:08.977 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:08.977 "is_configured": true, 00:13:08.977 "data_offset": 2048, 00:13:08.977 "data_size": 63488 00:13:08.977 }, 00:13:08.977 { 00:13:08.977 "name": "BaseBdev4", 00:13:08.977 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:08.977 "is_configured": true, 00:13:08.977 "data_offset": 2048, 00:13:08.977 "data_size": 63488 00:13:08.977 } 00:13:08.977 ] 00:13:08.977 }' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.977 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.236 23:08:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.236 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.236 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.236 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.236 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.236 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.237 "name": "raid_bdev1", 00:13:09.237 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:09.237 
"strip_size_kb": 0, 00:13:09.237 "state": "online", 00:13:09.237 "raid_level": "raid1", 00:13:09.237 "superblock": true, 00:13:09.237 "num_base_bdevs": 4, 00:13:09.237 "num_base_bdevs_discovered": 3, 00:13:09.237 "num_base_bdevs_operational": 3, 00:13:09.237 "base_bdevs_list": [ 00:13:09.237 { 00:13:09.237 "name": "spare", 00:13:09.237 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90", 00:13:09.237 "is_configured": true, 00:13:09.237 "data_offset": 2048, 00:13:09.237 "data_size": 63488 00:13:09.237 }, 00:13:09.237 { 00:13:09.237 "name": null, 00:13:09.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.237 "is_configured": false, 00:13:09.237 "data_offset": 0, 00:13:09.237 "data_size": 63488 00:13:09.237 }, 00:13:09.237 { 00:13:09.237 "name": "BaseBdev3", 00:13:09.237 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:09.237 "is_configured": true, 00:13:09.237 "data_offset": 2048, 00:13:09.237 "data_size": 63488 00:13:09.237 }, 00:13:09.237 { 00:13:09.237 "name": "BaseBdev4", 00:13:09.237 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:09.237 "is_configured": true, 00:13:09.237 "data_offset": 2048, 00:13:09.237 "data_size": 63488 00:13:09.237 } 00:13:09.237 ] 00:13:09.237 }' 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.237 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.497 86.75 IOPS, 260.25 MiB/s [2024-11-18T23:08:28.875Z] 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.497 [2024-11-18 23:08:28.708738] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.497 [2024-11-18 23:08:28.708809] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:13:09.497 00:13:09.497 Latency(us) 00:13:09.497 [2024-11-18T23:08:28.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.497 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:09.497 raid_bdev1 : 8.17 85.45 256.34 0.00 0.00 16748.76 284.39 115847.04 00:13:09.497 [2024-11-18T23:08:28.875Z] =================================================================================================================== 00:13:09.497 [2024-11-18T23:08:28.875Z] Total : 85.45 256.34 0.00 0.00 16748.76 284.39 115847.04 00:13:09.497 [2024-11-18 23:08:28.767593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.497 [2024-11-18 23:08:28.767662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.497 [2024-11-18 23:08:28.767796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.497 [2024-11-18 23:08:28.767862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:09.497 { 00:13:09.497 "results": [ 00:13:09.497 { 00:13:09.497 "job": "raid_bdev1", 00:13:09.497 "core_mask": "0x1", 00:13:09.497 "workload": "randrw", 00:13:09.497 "percentage": 50, 00:13:09.497 "status": "finished", 00:13:09.497 "queue_depth": 2, 00:13:09.497 "io_size": 3145728, 00:13:09.497 "runtime": 8.168883, 00:13:09.497 "iops": 85.44619870305402, 00:13:09.497 "mibps": 256.33859610916204, 00:13:09.497 "io_failed": 0, 00:13:09.497 "io_timeout": 0, 00:13:09.497 "avg_latency_us": 16748.763403861314, 00:13:09.497 "min_latency_us": 284.3947598253275, 00:13:09.497 "max_latency_us": 115847.04279475982 00:13:09.497 } 00:13:09.497 ], 00:13:09.497 "core_count": 1 00:13:09.497 } 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.497 23:08:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.497 23:08:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:09.757 /dev/nbd0 
00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.757 1+0 records in 00:13:09.757 1+0 records out 00:13:09.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645671 s, 6.3 MB/s 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # return 0 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:09.757 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.758 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:10.016 /dev/nbd1 00:13:10.016 23:08:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.016 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.016 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:10.016 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:10.016 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.016 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.016 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.017 1+0 records in 00:13:10.017 1+0 records out 00:13:10.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355969 s, 11.5 MB/s 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.017 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:10.275 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.275 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.275 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:10.275 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.275 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.276 
23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.276 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:10.534 /dev/nbd1 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.534 1+0 records in 00:13:10.534 1+0 records out 00:13:10.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549031 s, 7.5 MB/s 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.534 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:10.794 23:08:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:10.794 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.055 [2024-11-18 23:08:30.368238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:11.055 [2024-11-18 23:08:30.368356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.055 [2024-11-18 23:08:30.368399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:13:11.055 [2024-11-18 23:08:30.368435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.055 [2024-11-18 23:08:30.370619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.055 [2024-11-18 23:08:30.370706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:11.055 [2024-11-18 23:08:30.370824] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:11.055 [2024-11-18 23:08:30.370885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:11.055 [2024-11-18 23:08:30.371049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:11.055 [2024-11-18 23:08:30.371184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:11.055 spare
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.055 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.315 [2024-11-18 23:08:30.471141] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:13:11.315 [2024-11-18 23:08:30.471204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:11.315 [2024-11-18 23:08:30.471524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0
00:13:11.315 [2024-11-18 23:08:30.471688] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:13:11.315 [2024-11-18 23:08:30.471735] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:13:11.315 [2024-11-18 23:08:30.471889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.315 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.315 "name": "raid_bdev1",
00:13:11.315 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:11.315 "strip_size_kb": 0,
00:13:11.315 "state": "online",
00:13:11.315 "raid_level": "raid1",
00:13:11.315 "superblock": true,
00:13:11.315 "num_base_bdevs": 4,
00:13:11.315 "num_base_bdevs_discovered": 3,
00:13:11.315 "num_base_bdevs_operational": 3,
00:13:11.315 "base_bdevs_list": [
00:13:11.315 {
00:13:11.315 "name": "spare",
00:13:11.315 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90",
00:13:11.315 "is_configured": true,
00:13:11.315 "data_offset": 2048,
00:13:11.316 "data_size": 63488
00:13:11.316 },
00:13:11.316 {
00:13:11.316 "name": null,
00:13:11.316 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.316 "is_configured": false,
00:13:11.316 "data_offset": 2048,
00:13:11.316 "data_size": 63488
00:13:11.316 },
00:13:11.316 {
00:13:11.316 "name": "BaseBdev3",
00:13:11.316 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:11.316 "is_configured": true,
00:13:11.316 "data_offset": 2048,
00:13:11.316 "data_size": 63488
00:13:11.316 },
00:13:11.316 {
00:13:11.316 "name": "BaseBdev4",
00:13:11.316 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:11.316 "is_configured": true,
00:13:11.316 "data_offset": 2048,
00:13:11.316 "data_size": 63488
00:13:11.316 }
00:13:11.316 ]
00:13:11.316 }'
00:13:11.316 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.316 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.576 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:11.576 "name": "raid_bdev1",
00:13:11.576 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:11.576 "strip_size_kb": 0,
00:13:11.576 "state": "online",
00:13:11.576 "raid_level": "raid1",
00:13:11.576 "superblock": true,
00:13:11.576 "num_base_bdevs": 4,
00:13:11.576 "num_base_bdevs_discovered": 3,
00:13:11.576 "num_base_bdevs_operational": 3,
00:13:11.576 "base_bdevs_list": [
00:13:11.576 {
00:13:11.576 "name": "spare",
00:13:11.576 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90",
00:13:11.576 "is_configured": true,
00:13:11.576 "data_offset": 2048,
00:13:11.576 "data_size": 63488
00:13:11.576 },
00:13:11.576 {
00:13:11.576 "name": null,
00:13:11.576 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.576 "is_configured": false,
00:13:11.576 "data_offset": 2048,
00:13:11.576 "data_size": 63488
00:13:11.576 },
00:13:11.576 {
00:13:11.576 "name": "BaseBdev3",
00:13:11.576 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:11.576 "is_configured": true,
00:13:11.576 "data_offset": 2048,
00:13:11.576 "data_size": 63488
00:13:11.576 },
00:13:11.576 {
00:13:11.576 "name": "BaseBdev4",
00:13:11.576 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:11.576 "is_configured": true,
00:13:11.576 "data_offset": 2048,
00:13:11.576 "data_size": 63488
00:13:11.576 }
00:13:11.576 ]
00:13:11.576 }'
00:13:11.838 23:08:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.838 [2024-11-18 23:08:31.111381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.838 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.838 "name": "raid_bdev1",
00:13:11.838 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:11.838 "strip_size_kb": 0,
00:13:11.838 "state": "online",
00:13:11.839 "raid_level": "raid1",
00:13:11.839 "superblock": true,
00:13:11.839 "num_base_bdevs": 4,
00:13:11.839 "num_base_bdevs_discovered": 2,
00:13:11.839 "num_base_bdevs_operational": 2,
00:13:11.839 "base_bdevs_list": [
00:13:11.839 {
00:13:11.839 "name": null,
00:13:11.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.839 "is_configured": false,
00:13:11.839 "data_offset": 0,
00:13:11.839 "data_size": 63488
00:13:11.839 },
00:13:11.839 {
00:13:11.839 "name": null,
00:13:11.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.839 "is_configured": false,
00:13:11.839 "data_offset": 2048,
00:13:11.839 "data_size": 63488
00:13:11.839 },
00:13:11.839 {
00:13:11.839 "name": "BaseBdev3",
00:13:11.839 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:11.839 "is_configured": true,
00:13:11.839 "data_offset": 2048,
00:13:11.839 "data_size": 63488
00:13:11.839 },
00:13:11.839 {
00:13:11.839 "name": "BaseBdev4",
00:13:11.839 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:11.839 "is_configured": true,
00:13:11.839 "data_offset": 2048,
00:13:11.839 "data_size": 63488
00:13:11.839 }
00:13:11.839 ]
00:13:11.839 }'
00:13:11.839 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.839 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:12.440 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:12.440 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.440 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:12.440 [2024-11-18 23:08:31.587402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:12.440 [2024-11-18 23:08:31.587557] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:13:12.440 [2024-11-18 23:08:31.587572] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:12.440 [2024-11-18 23:08:31.587603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:12.440 [2024-11-18 23:08:31.591108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090
00:13:12.440 [2024-11-18 23:08:31.593001] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:12.440 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.440 23:08:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:13.380 "name": "raid_bdev1",
00:13:13.380 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:13.380 "strip_size_kb": 0,
00:13:13.380 "state": "online",
00:13:13.380 "raid_level": "raid1",
00:13:13.380 "superblock": true,
00:13:13.380 "num_base_bdevs": 4,
00:13:13.380 "num_base_bdevs_discovered": 3,
00:13:13.380 "num_base_bdevs_operational": 3,
00:13:13.380 "process": {
00:13:13.380 "type": "rebuild",
00:13:13.380 "target": "spare",
00:13:13.380 "progress": {
00:13:13.380 "blocks": 20480,
00:13:13.380 "percent": 32
00:13:13.380 }
00:13:13.380 },
00:13:13.380 "base_bdevs_list": [
00:13:13.380 {
00:13:13.380 "name": "spare",
00:13:13.380 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90",
00:13:13.380 "is_configured": true,
00:13:13.380 "data_offset": 2048,
00:13:13.380 "data_size": 63488
00:13:13.380 },
00:13:13.380 {
00:13:13.380 "name": null,
00:13:13.380 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.380 "is_configured": false,
00:13:13.380 "data_offset": 2048,
00:13:13.380 "data_size": 63488
00:13:13.380 },
00:13:13.380 {
00:13:13.380 "name": "BaseBdev3",
00:13:13.380 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:13.380 "is_configured": true,
00:13:13.380 "data_offset": 2048,
00:13:13.380 "data_size": 63488
00:13:13.380 },
00:13:13.380 {
00:13:13.380 "name": "BaseBdev4",
00:13:13.380 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:13.380 "is_configured": true,
00:13:13.380 "data_offset": 2048,
00:13:13.380 "data_size": 63488
00:13:13.380 }
00:13:13.380 ]
00:13:13.380 }'
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.380 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.380 [2024-11-18 23:08:32.755785] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:13.639 [2024-11-18 23:08:32.797000] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:13.639 [2024-11-18 23:08:32.797049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:13.639 [2024-11-18 23:08:32.797068] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:13.639 [2024-11-18 23:08:32.797075] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:13.639 "name": "raid_bdev1",
00:13:13.639 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:13.639 "strip_size_kb": 0,
00:13:13.639 "state": "online",
00:13:13.639 "raid_level": "raid1",
00:13:13.639 "superblock": true,
00:13:13.639 "num_base_bdevs": 4,
00:13:13.639 "num_base_bdevs_discovered": 2,
00:13:13.639 "num_base_bdevs_operational": 2,
00:13:13.639 "base_bdevs_list": [
00:13:13.639 {
00:13:13.639 "name": null,
00:13:13.639 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.639 "is_configured": false,
00:13:13.639 "data_offset": 0,
00:13:13.639 "data_size": 63488
00:13:13.639 },
00:13:13.639 {
00:13:13.639 "name": null,
00:13:13.639 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.639 "is_configured": false,
00:13:13.639 "data_offset": 2048,
00:13:13.639 "data_size": 63488
00:13:13.639 },
00:13:13.639 {
00:13:13.639 "name": "BaseBdev3",
00:13:13.639 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:13.639 "is_configured": true,
00:13:13.639 "data_offset": 2048,
00:13:13.639 "data_size": 63488
00:13:13.639 },
00:13:13.639 {
00:13:13.639 "name": "BaseBdev4",
00:13:13.639 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:13.639 "is_configured": true,
00:13:13.639 "data_offset": 2048,
00:13:13.639 "data_size": 63488
00:13:13.639 }
00:13:13.639 ]
00:13:13.639 }'
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:13.639 23:08:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:14.208 23:08:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:14.208 23:08:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.208 23:08:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:14.208 [2024-11-18 23:08:33.288372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:14.208 [2024-11-18 23:08:33.288426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:14.208 [2024-11-18 23:08:33.288455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:13:14.208 [2024-11-18 23:08:33.288463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:14.208 [2024-11-18 23:08:33.288900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:14.208 [2024-11-18 23:08:33.288918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:14.208 [2024-11-18 23:08:33.288993] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:14.208 [2024-11-18 23:08:33.289004] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:13:14.208 [2024-11-18 23:08:33.289015] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:14.208 [2024-11-18 23:08:33.289044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:14.208 [2024-11-18 23:08:33.292174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
00:13:14.208 [2024-11-18 23:08:33.293982] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:14.208 spare
00:13:14.208 23:08:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.208 23:08:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.148 "name": "raid_bdev1",
00:13:15.148 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:15.148 "strip_size_kb": 0,
00:13:15.148 "state": "online",
00:13:15.148 "raid_level": "raid1",
00:13:15.148 "superblock": true,
00:13:15.148 "num_base_bdevs": 4,
00:13:15.148 "num_base_bdevs_discovered": 3,
00:13:15.148 "num_base_bdevs_operational": 3,
00:13:15.148 "process": {
00:13:15.148 "type": "rebuild",
00:13:15.148 "target": "spare",
00:13:15.148 "progress": {
00:13:15.148 "blocks": 20480,
00:13:15.148 "percent": 32
00:13:15.148 }
00:13:15.148 },
00:13:15.148 "base_bdevs_list": [
00:13:15.148 {
00:13:15.148 "name": "spare",
00:13:15.148 "uuid": "43204d2e-03e9-5027-817f-05e0328abf90",
00:13:15.148 "is_configured": true,
00:13:15.148 "data_offset": 2048,
00:13:15.148 "data_size": 63488
00:13:15.148 },
00:13:15.148 {
00:13:15.148 "name": null,
00:13:15.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.148 "is_configured": false,
00:13:15.148 "data_offset": 2048,
00:13:15.148 "data_size": 63488
00:13:15.148 },
00:13:15.148 {
00:13:15.148 "name": "BaseBdev3",
00:13:15.148 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:15.148 "is_configured": true,
00:13:15.148 "data_offset": 2048,
00:13:15.148 "data_size": 63488
00:13:15.148 },
00:13:15.148 {
00:13:15.148 "name": "BaseBdev4",
00:13:15.148 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:15.148 "is_configured": true,
00:13:15.148 "data_offset": 2048,
00:13:15.148 "data_size": 63488
00:13:15.148 }
00:13:15.148 ]
00:13:15.148 }'
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.148 [2024-11-18 23:08:34.456708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:15.148 [2024-11-18 23:08:34.497920] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:15.148 [2024-11-18 23:08:34.497972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:15.148 [2024-11-18 23:08:34.497986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:15.148 [2024-11-18 23:08:34.497995] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.148 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.409 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.409 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:15.409 "name": "raid_bdev1",
00:13:15.409 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:15.409 "strip_size_kb": 0,
00:13:15.409 "state": "online",
00:13:15.409 "raid_level": "raid1",
00:13:15.409 "superblock": true,
00:13:15.409 "num_base_bdevs": 4,
00:13:15.409 "num_base_bdevs_discovered": 2,
00:13:15.409 "num_base_bdevs_operational": 2,
00:13:15.409 "base_bdevs_list": [
00:13:15.409 {
00:13:15.409 "name": null,
00:13:15.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.409 "is_configured": false,
00:13:15.409 "data_offset": 0,
00:13:15.409 "data_size": 63488
00:13:15.409 },
00:13:15.409 {
00:13:15.409 "name": null,
00:13:15.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.409 "is_configured": false,
00:13:15.409 "data_offset": 2048,
00:13:15.409 "data_size": 63488
00:13:15.409 },
00:13:15.409 {
00:13:15.409 "name": "BaseBdev3",
00:13:15.409 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:15.409 "is_configured": true,
00:13:15.409 "data_offset": 2048,
00:13:15.409 "data_size": 63488
00:13:15.409 },
00:13:15.409 {
00:13:15.409 "name": "BaseBdev4",
00:13:15.409 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:15.409 "is_configured": true,
00:13:15.409 "data_offset": 2048,
00:13:15.409 "data_size": 63488
00:13:15.409 }
00:13:15.409 ]
00:13:15.409 }'
00:13:15.409 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:15.409 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.668 23:08:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.668 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.668 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.668 "name": "raid_bdev1",
00:13:15.668 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9",
00:13:15.668 "strip_size_kb": 0,
00:13:15.668 "state": "online",
00:13:15.668 "raid_level": "raid1",
00:13:15.668 "superblock": true,
00:13:15.668 "num_base_bdevs": 4,
00:13:15.668 "num_base_bdevs_discovered": 2,
00:13:15.668 "num_base_bdevs_operational": 2,
00:13:15.668 "base_bdevs_list": [
00:13:15.668 {
00:13:15.668 "name": null,
00:13:15.668 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.668 "is_configured": false,
00:13:15.668 "data_offset": 0,
00:13:15.668 "data_size": 63488
00:13:15.668 },
00:13:15.668 {
00:13:15.668 "name": null,
00:13:15.668 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.668 "is_configured": false,
00:13:15.668 "data_offset": 2048,
00:13:15.668 "data_size": 63488
00:13:15.668 },
00:13:15.668 {
00:13:15.668 "name": "BaseBdev3",
00:13:15.668 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d",
00:13:15.668 "is_configured": true,
00:13:15.668 "data_offset": 2048,
00:13:15.668 "data_size": 63488
00:13:15.668 },
00:13:15.668 {
00:13:15.668 "name": "BaseBdev4",
00:13:15.668 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f",
00:13:15.668 "is_configured": true,
00:13:15.668 "data_offset": 2048,
00:13:15.668 "data_size": 63488
00:13:15.668 }
00:13:15.668 ]
00:13:15.668 }'
00:13:15.668 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.668 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.928 [2024-11-18 23:08:35.112686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:15.928 [2024-11-18 23:08:35.112740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:15.928 [2024-11-18 23:08:35.112762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80
00:13:15.928 [2024-11-18 23:08:35.112772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:15.928 [2024-11-18 23:08:35.113155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:15.928 [2024-11-18 23:08:35.113174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:15.928 [2024-11-18 23:08:35.113240] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:15.928 [2024-11-18 23:08:35.113256] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:13:15.928 [2024-11-18 23:08:35.113274] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:15.928 [2024-11-18 23:08:35.113305] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:15.928 BaseBdev1
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.928 23:08:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.867 "name": "raid_bdev1", 00:13:16.867 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:16.867 "strip_size_kb": 0, 00:13:16.867 "state": "online", 00:13:16.867 "raid_level": "raid1", 00:13:16.867 "superblock": true, 00:13:16.867 "num_base_bdevs": 4, 00:13:16.867 "num_base_bdevs_discovered": 2, 00:13:16.867 "num_base_bdevs_operational": 2, 00:13:16.867 "base_bdevs_list": [ 00:13:16.867 { 00:13:16.867 "name": null, 00:13:16.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.867 
"is_configured": false, 00:13:16.867 "data_offset": 0, 00:13:16.867 "data_size": 63488 00:13:16.867 }, 00:13:16.867 { 00:13:16.867 "name": null, 00:13:16.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.867 "is_configured": false, 00:13:16.867 "data_offset": 2048, 00:13:16.867 "data_size": 63488 00:13:16.867 }, 00:13:16.867 { 00:13:16.867 "name": "BaseBdev3", 00:13:16.867 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:16.867 "is_configured": true, 00:13:16.867 "data_offset": 2048, 00:13:16.867 "data_size": 63488 00:13:16.867 }, 00:13:16.867 { 00:13:16.867 "name": "BaseBdev4", 00:13:16.867 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:16.867 "is_configured": true, 00:13:16.867 "data_offset": 2048, 00:13:16.867 "data_size": 63488 00:13:16.867 } 00:13:16.867 ] 00:13:16.867 }' 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.867 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.437 "name": "raid_bdev1", 00:13:17.437 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:17.437 "strip_size_kb": 0, 00:13:17.437 "state": "online", 00:13:17.437 "raid_level": "raid1", 00:13:17.437 "superblock": true, 00:13:17.437 "num_base_bdevs": 4, 00:13:17.437 "num_base_bdevs_discovered": 2, 00:13:17.437 "num_base_bdevs_operational": 2, 00:13:17.437 "base_bdevs_list": [ 00:13:17.437 { 00:13:17.437 "name": null, 00:13:17.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.437 "is_configured": false, 00:13:17.437 "data_offset": 0, 00:13:17.437 "data_size": 63488 00:13:17.437 }, 00:13:17.437 { 00:13:17.437 "name": null, 00:13:17.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.437 "is_configured": false, 00:13:17.437 "data_offset": 2048, 00:13:17.437 "data_size": 63488 00:13:17.437 }, 00:13:17.437 { 00:13:17.437 "name": "BaseBdev3", 00:13:17.437 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:17.437 "is_configured": true, 00:13:17.437 "data_offset": 2048, 00:13:17.437 "data_size": 63488 00:13:17.437 }, 00:13:17.437 { 00:13:17.437 "name": "BaseBdev4", 00:13:17.437 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:17.437 "is_configured": true, 00:13:17.437 "data_offset": 2048, 00:13:17.437 "data_size": 63488 00:13:17.437 } 00:13:17.437 ] 00:13:17.437 }' 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.437 [2024-11-18 23:08:36.738487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.437 [2024-11-18 23:08:36.738614] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:17.437 [2024-11-18 23:08:36.738624] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:17.437 request: 00:13:17.437 { 00:13:17.437 "base_bdev": "BaseBdev1", 00:13:17.437 "raid_bdev": "raid_bdev1", 00:13:17.437 "method": "bdev_raid_add_base_bdev", 00:13:17.437 "req_id": 1 00:13:17.437 } 00:13:17.437 Got JSON-RPC error response 00:13:17.437 response: 00:13:17.437 { 
00:13:17.437 "code": -22, 00:13:17.437 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:17.437 } 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.437 23:08:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:18.376 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.635 "name": "raid_bdev1", 00:13:18.635 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:18.635 "strip_size_kb": 0, 00:13:18.635 "state": "online", 00:13:18.635 "raid_level": "raid1", 00:13:18.635 "superblock": true, 00:13:18.635 "num_base_bdevs": 4, 00:13:18.635 "num_base_bdevs_discovered": 2, 00:13:18.635 "num_base_bdevs_operational": 2, 00:13:18.635 "base_bdevs_list": [ 00:13:18.635 { 00:13:18.635 "name": null, 00:13:18.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.635 "is_configured": false, 00:13:18.635 "data_offset": 0, 00:13:18.635 "data_size": 63488 00:13:18.635 }, 00:13:18.635 { 00:13:18.635 "name": null, 00:13:18.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.635 "is_configured": false, 00:13:18.635 "data_offset": 2048, 00:13:18.635 "data_size": 63488 00:13:18.635 }, 00:13:18.635 { 00:13:18.635 "name": "BaseBdev3", 00:13:18.635 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:18.635 "is_configured": true, 00:13:18.635 "data_offset": 2048, 00:13:18.635 "data_size": 63488 00:13:18.635 }, 00:13:18.635 { 00:13:18.635 "name": "BaseBdev4", 00:13:18.635 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:18.635 "is_configured": true, 00:13:18.635 "data_offset": 2048, 00:13:18.635 "data_size": 63488 00:13:18.635 } 00:13:18.635 ] 00:13:18.635 }' 00:13:18.635 23:08:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.635 23:08:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.895 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.155 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.155 "name": "raid_bdev1", 00:13:19.155 "uuid": "f9cf0148-c367-471f-8da5-cc1b5b6dbce9", 00:13:19.155 "strip_size_kb": 0, 00:13:19.155 "state": "online", 00:13:19.155 "raid_level": "raid1", 00:13:19.155 "superblock": true, 00:13:19.155 "num_base_bdevs": 4, 00:13:19.155 "num_base_bdevs_discovered": 2, 00:13:19.156 "num_base_bdevs_operational": 2, 00:13:19.156 "base_bdevs_list": [ 00:13:19.156 { 00:13:19.156 "name": null, 00:13:19.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.156 "is_configured": false, 00:13:19.156 "data_offset": 0, 00:13:19.156 "data_size": 63488 00:13:19.156 }, 00:13:19.156 { 00:13:19.156 "name": null, 00:13:19.156 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:19.156 "is_configured": false, 00:13:19.156 "data_offset": 2048, 00:13:19.156 "data_size": 63488 00:13:19.156 }, 00:13:19.156 { 00:13:19.156 "name": "BaseBdev3", 00:13:19.156 "uuid": "a26d882d-9897-556e-9da7-7947d39b802d", 00:13:19.156 "is_configured": true, 00:13:19.156 "data_offset": 2048, 00:13:19.156 "data_size": 63488 00:13:19.156 }, 00:13:19.156 { 00:13:19.156 "name": "BaseBdev4", 00:13:19.156 "uuid": "5e911a71-32fe-55e6-821f-744b26ab454f", 00:13:19.156 "is_configured": true, 00:13:19.156 "data_offset": 2048, 00:13:19.156 "data_size": 63488 00:13:19.156 } 00:13:19.156 ] 00:13:19.156 }' 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89696 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89696 ']' 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89696 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89696 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:13:19.156 killing process with pid 89696 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89696' 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89696 00:13:19.156 Received shutdown signal, test time was about 17.855355 seconds 00:13:19.156 00:13:19.156 Latency(us) 00:13:19.156 [2024-11-18T23:08:38.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.156 [2024-11-18T23:08:38.534Z] =================================================================================================================== 00:13:19.156 [2024-11-18T23:08:38.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.156 [2024-11-18 23:08:38.432366] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.156 [2024-11-18 23:08:38.432510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.156 [2024-11-18 23:08:38.432581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.156 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89696 00:13:19.156 [2024-11-18 23:08:38.432593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:19.156 [2024-11-18 23:08:38.476939] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.417 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:19.417 00:13:19.417 real 0m19.833s 00:13:19.417 user 0m26.408s 00:13:19.417 sys 0m2.718s 00:13:19.417 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.417 23:08:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.417 ************************************ 00:13:19.417 END TEST raid_rebuild_test_sb_io 00:13:19.417 
************************************ 00:13:19.417 23:08:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:19.417 23:08:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:19.417 23:08:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:19.417 23:08:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.417 23:08:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.678 ************************************ 00:13:19.678 START TEST raid5f_state_function_test 00:13:19.678 ************************************ 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.678 23:08:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90403 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:19.678 Process raid pid: 90403 00:13:19.678 
23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90403' 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90403 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90403 ']' 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.678 23:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.678 [2024-11-18 23:08:38.898512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:19.678 [2024-11-18 23:08:38.898630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.938 [2024-11-18 23:08:39.060945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.938 [2024-11-18 23:08:39.107854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.938 [2024-11-18 23:08:39.150586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.938 [2024-11-18 23:08:39.150627] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.508 [2024-11-18 23:08:39.716030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.508 [2024-11-18 23:08:39.716071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.508 [2024-11-18 23:08:39.716085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.508 [2024-11-18 23:08:39.716095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.508 [2024-11-18 23:08:39.716100] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:20.508 [2024-11-18 23:08:39.716112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.508 "name": "Existed_Raid", 00:13:20.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.508 "strip_size_kb": 64, 00:13:20.508 "state": "configuring", 00:13:20.508 "raid_level": "raid5f", 00:13:20.508 "superblock": false, 00:13:20.508 "num_base_bdevs": 3, 00:13:20.508 "num_base_bdevs_discovered": 0, 00:13:20.508 "num_base_bdevs_operational": 3, 00:13:20.508 "base_bdevs_list": [ 00:13:20.508 { 00:13:20.508 "name": "BaseBdev1", 00:13:20.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.508 "is_configured": false, 00:13:20.508 "data_offset": 0, 00:13:20.508 "data_size": 0 00:13:20.508 }, 00:13:20.508 { 00:13:20.508 "name": "BaseBdev2", 00:13:20.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.508 "is_configured": false, 00:13:20.508 "data_offset": 0, 00:13:20.508 "data_size": 0 00:13:20.508 }, 00:13:20.508 { 00:13:20.508 "name": "BaseBdev3", 00:13:20.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.508 "is_configured": false, 00:13:20.508 "data_offset": 0, 00:13:20.508 "data_size": 0 00:13:20.508 } 00:13:20.508 ] 00:13:20.508 }' 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.508 23:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 [2024-11-18 23:08:40.167269] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.079 [2024-11-18 23:08:40.167330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 [2024-11-18 23:08:40.175308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.079 [2024-11-18 23:08:40.175357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.079 [2024-11-18 23:08:40.175364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.079 [2024-11-18 23:08:40.175373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.079 [2024-11-18 23:08:40.175379] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.079 [2024-11-18 23:08:40.175388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 [2024-11-18 23:08:40.196216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.079 BaseBdev1 00:13:21.079 23:08:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.079 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.079 [ 00:13:21.079 { 00:13:21.079 "name": "BaseBdev1", 00:13:21.079 "aliases": [ 00:13:21.079 "15469bb1-2794-4f5a-89b7-16db37829ec9" 00:13:21.079 ], 00:13:21.079 "product_name": "Malloc disk", 00:13:21.079 "block_size": 512, 00:13:21.079 "num_blocks": 65536, 00:13:21.079 "uuid": "15469bb1-2794-4f5a-89b7-16db37829ec9", 00:13:21.079 "assigned_rate_limits": { 00:13:21.079 "rw_ios_per_sec": 0, 00:13:21.079 
"rw_mbytes_per_sec": 0, 00:13:21.079 "r_mbytes_per_sec": 0, 00:13:21.079 "w_mbytes_per_sec": 0 00:13:21.079 }, 00:13:21.079 "claimed": true, 00:13:21.079 "claim_type": "exclusive_write", 00:13:21.079 "zoned": false, 00:13:21.079 "supported_io_types": { 00:13:21.079 "read": true, 00:13:21.079 "write": true, 00:13:21.079 "unmap": true, 00:13:21.079 "flush": true, 00:13:21.079 "reset": true, 00:13:21.079 "nvme_admin": false, 00:13:21.079 "nvme_io": false, 00:13:21.079 "nvme_io_md": false, 00:13:21.079 "write_zeroes": true, 00:13:21.079 "zcopy": true, 00:13:21.079 "get_zone_info": false, 00:13:21.079 "zone_management": false, 00:13:21.079 "zone_append": false, 00:13:21.079 "compare": false, 00:13:21.079 "compare_and_write": false, 00:13:21.079 "abort": true, 00:13:21.079 "seek_hole": false, 00:13:21.079 "seek_data": false, 00:13:21.079 "copy": true, 00:13:21.079 "nvme_iov_md": false 00:13:21.080 }, 00:13:21.080 "memory_domains": [ 00:13:21.080 { 00:13:21.080 "dma_device_id": "system", 00:13:21.080 "dma_device_type": 1 00:13:21.080 }, 00:13:21.080 { 00:13:21.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.080 "dma_device_type": 2 00:13:21.080 } 00:13:21.080 ], 00:13:21.080 "driver_specific": {} 00:13:21.080 } 00:13:21.080 ] 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.080 23:08:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.080 "name": "Existed_Raid", 00:13:21.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.080 "strip_size_kb": 64, 00:13:21.080 "state": "configuring", 00:13:21.080 "raid_level": "raid5f", 00:13:21.080 "superblock": false, 00:13:21.080 "num_base_bdevs": 3, 00:13:21.080 "num_base_bdevs_discovered": 1, 00:13:21.080 "num_base_bdevs_operational": 3, 00:13:21.080 "base_bdevs_list": [ 00:13:21.080 { 00:13:21.080 "name": "BaseBdev1", 00:13:21.080 "uuid": "15469bb1-2794-4f5a-89b7-16db37829ec9", 00:13:21.080 "is_configured": true, 00:13:21.080 "data_offset": 0, 00:13:21.080 "data_size": 65536 00:13:21.080 }, 00:13:21.080 { 00:13:21.080 "name": 
"BaseBdev2", 00:13:21.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.080 "is_configured": false, 00:13:21.080 "data_offset": 0, 00:13:21.080 "data_size": 0 00:13:21.080 }, 00:13:21.080 { 00:13:21.080 "name": "BaseBdev3", 00:13:21.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.080 "is_configured": false, 00:13:21.080 "data_offset": 0, 00:13:21.080 "data_size": 0 00:13:21.080 } 00:13:21.080 ] 00:13:21.080 }' 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.080 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.341 [2024-11-18 23:08:40.675393] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.341 [2024-11-18 23:08:40.675434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.341 [2024-11-18 23:08:40.687422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.341 [2024-11-18 23:08:40.689255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:21.341 [2024-11-18 23:08:40.689301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.341 [2024-11-18 23:08:40.689310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.341 [2024-11-18 23:08:40.689319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.341 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.601 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.601 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.601 "name": "Existed_Raid", 00:13:21.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.601 "strip_size_kb": 64, 00:13:21.601 "state": "configuring", 00:13:21.601 "raid_level": "raid5f", 00:13:21.601 "superblock": false, 00:13:21.601 "num_base_bdevs": 3, 00:13:21.601 "num_base_bdevs_discovered": 1, 00:13:21.601 "num_base_bdevs_operational": 3, 00:13:21.601 "base_bdevs_list": [ 00:13:21.601 { 00:13:21.601 "name": "BaseBdev1", 00:13:21.601 "uuid": "15469bb1-2794-4f5a-89b7-16db37829ec9", 00:13:21.601 "is_configured": true, 00:13:21.601 "data_offset": 0, 00:13:21.601 "data_size": 65536 00:13:21.601 }, 00:13:21.601 { 00:13:21.601 "name": "BaseBdev2", 00:13:21.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.601 "is_configured": false, 00:13:21.601 "data_offset": 0, 00:13:21.601 "data_size": 0 00:13:21.601 }, 00:13:21.601 { 00:13:21.601 "name": "BaseBdev3", 00:13:21.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.601 "is_configured": false, 00:13:21.601 "data_offset": 0, 00:13:21.601 "data_size": 0 00:13:21.601 } 00:13:21.601 ] 00:13:21.601 }' 00:13:21.601 23:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.601 23:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.861 23:08:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:21.861 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.861 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.861 [2024-11-18 23:08:41.171001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.861 BaseBdev2 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.862 [ 00:13:21.862 { 00:13:21.862 "name": "BaseBdev2", 00:13:21.862 "aliases": [ 00:13:21.862 "e2e82ab6-d322-4f1e-ac31-354008c7935c" 00:13:21.862 ], 00:13:21.862 "product_name": "Malloc disk", 00:13:21.862 "block_size": 512, 00:13:21.862 "num_blocks": 65536, 00:13:21.862 "uuid": "e2e82ab6-d322-4f1e-ac31-354008c7935c", 00:13:21.862 "assigned_rate_limits": { 00:13:21.862 "rw_ios_per_sec": 0, 00:13:21.862 "rw_mbytes_per_sec": 0, 00:13:21.862 "r_mbytes_per_sec": 0, 00:13:21.862 "w_mbytes_per_sec": 0 00:13:21.862 }, 00:13:21.862 "claimed": true, 00:13:21.862 "claim_type": "exclusive_write", 00:13:21.862 "zoned": false, 00:13:21.862 "supported_io_types": { 00:13:21.862 "read": true, 00:13:21.862 "write": true, 00:13:21.862 "unmap": true, 00:13:21.862 "flush": true, 00:13:21.862 "reset": true, 00:13:21.862 "nvme_admin": false, 00:13:21.862 "nvme_io": false, 00:13:21.862 "nvme_io_md": false, 00:13:21.862 "write_zeroes": true, 00:13:21.862 "zcopy": true, 00:13:21.862 "get_zone_info": false, 00:13:21.862 "zone_management": false, 00:13:21.862 "zone_append": false, 00:13:21.862 "compare": false, 00:13:21.862 "compare_and_write": false, 00:13:21.862 "abort": true, 00:13:21.862 "seek_hole": false, 00:13:21.862 "seek_data": false, 00:13:21.862 "copy": true, 00:13:21.862 "nvme_iov_md": false 00:13:21.862 }, 00:13:21.862 "memory_domains": [ 00:13:21.862 { 00:13:21.862 "dma_device_id": "system", 00:13:21.862 "dma_device_type": 1 00:13:21.862 }, 00:13:21.862 { 00:13:21.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.862 "dma_device_type": 2 00:13:21.862 } 00:13:21.862 ], 00:13:21.862 "driver_specific": {} 00:13:21.862 } 00:13:21.862 ] 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.122 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:22.122 "name": "Existed_Raid", 00:13:22.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.122 "strip_size_kb": 64, 00:13:22.122 "state": "configuring", 00:13:22.122 "raid_level": "raid5f", 00:13:22.122 "superblock": false, 00:13:22.122 "num_base_bdevs": 3, 00:13:22.123 "num_base_bdevs_discovered": 2, 00:13:22.123 "num_base_bdevs_operational": 3, 00:13:22.123 "base_bdevs_list": [ 00:13:22.123 { 00:13:22.123 "name": "BaseBdev1", 00:13:22.123 "uuid": "15469bb1-2794-4f5a-89b7-16db37829ec9", 00:13:22.123 "is_configured": true, 00:13:22.123 "data_offset": 0, 00:13:22.123 "data_size": 65536 00:13:22.123 }, 00:13:22.123 { 00:13:22.123 "name": "BaseBdev2", 00:13:22.123 "uuid": "e2e82ab6-d322-4f1e-ac31-354008c7935c", 00:13:22.123 "is_configured": true, 00:13:22.123 "data_offset": 0, 00:13:22.123 "data_size": 65536 00:13:22.123 }, 00:13:22.123 { 00:13:22.123 "name": "BaseBdev3", 00:13:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.123 "is_configured": false, 00:13:22.123 "data_offset": 0, 00:13:22.123 "data_size": 0 00:13:22.123 } 00:13:22.123 ] 00:13:22.123 }' 00:13:22.123 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.123 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.383 [2024-11-18 23:08:41.669181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.383 [2024-11-18 23:08:41.669319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:22.383 [2024-11-18 23:08:41.669350] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 [2024-11-18 23:08:41.669682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 [2024-11-18 23:08:41.670155] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 [2024-11-18 23:08:41.670204] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 BaseBdev3 [2024-11-18 23:08:41.670439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.383 [ 00:13:22.383 { 00:13:22.383 "name": "BaseBdev3", 00:13:22.383 "aliases": [ 00:13:22.383 "6d9d9ef3-8a59-445b-9297-920585c1edfa" 00:13:22.383 ], 00:13:22.383 "product_name": "Malloc disk", 00:13:22.383 "block_size": 512, 00:13:22.383 "num_blocks": 65536, 00:13:22.383 "uuid": "6d9d9ef3-8a59-445b-9297-920585c1edfa", 00:13:22.383 "assigned_rate_limits": { 00:13:22.383 "rw_ios_per_sec": 0, 00:13:22.383 "rw_mbytes_per_sec": 0, 00:13:22.383 "r_mbytes_per_sec": 0, 00:13:22.383 "w_mbytes_per_sec": 0 00:13:22.383 }, 00:13:22.383 "claimed": true, 00:13:22.383 "claim_type": "exclusive_write", 00:13:22.383 "zoned": false, 00:13:22.383 "supported_io_types": { 00:13:22.383 "read": true, 00:13:22.383 "write": true, 00:13:22.383 "unmap": true, 00:13:22.383 "flush": true, 00:13:22.383 "reset": true, 00:13:22.383 "nvme_admin": false, 00:13:22.383 "nvme_io": false, 00:13:22.383 "nvme_io_md": false, 00:13:22.383 "write_zeroes": true, 00:13:22.383 "zcopy": true, 00:13:22.383 "get_zone_info": false, 00:13:22.383 "zone_management": false, 00:13:22.383 "zone_append": false, 00:13:22.383 "compare": false, 00:13:22.383 "compare_and_write": false, 00:13:22.383 "abort": true, 00:13:22.383 "seek_hole": false, 00:13:22.383 "seek_data": false, 00:13:22.383 "copy": true, 00:13:22.383 "nvme_iov_md": false 00:13:22.383 }, 00:13:22.383 "memory_domains": [ 00:13:22.383 { 00:13:22.383 "dma_device_id": "system", 00:13:22.383 "dma_device_type": 1 00:13:22.383 }, 00:13:22.383 { 00:13:22.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.383 "dma_device_type": 2 00:13:22.383 } 00:13:22.383 ], 00:13:22.383 "driver_specific": {} 00:13:22.383 } 00:13:22.383 ] 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.383 23:08:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.383 "name": "Existed_Raid", 00:13:22.383 "uuid": "ce10b303-4b77-4f36-8b61-33d23b3932ae", 00:13:22.383 "strip_size_kb": 64, 00:13:22.383 "state": "online", 00:13:22.383 "raid_level": "raid5f", 00:13:22.383 "superblock": false, 00:13:22.383 "num_base_bdevs": 3, 00:13:22.383 "num_base_bdevs_discovered": 3, 00:13:22.383 "num_base_bdevs_operational": 3, 00:13:22.383 "base_bdevs_list": [ 00:13:22.383 { 00:13:22.383 "name": "BaseBdev1", 00:13:22.383 "uuid": "15469bb1-2794-4f5a-89b7-16db37829ec9", 00:13:22.383 "is_configured": true, 00:13:22.383 "data_offset": 0, 00:13:22.383 "data_size": 65536 00:13:22.383 }, 00:13:22.383 { 00:13:22.383 "name": "BaseBdev2", 00:13:22.383 "uuid": "e2e82ab6-d322-4f1e-ac31-354008c7935c", 00:13:22.383 "is_configured": true, 00:13:22.383 "data_offset": 0, 00:13:22.383 "data_size": 65536 00:13:22.383 }, 00:13:22.383 { 00:13:22.383 "name": "BaseBdev3", 00:13:22.383 "uuid": "6d9d9ef3-8a59-445b-9297-920585c1edfa", 00:13:22.383 "is_configured": true, 00:13:22.383 "data_offset": 0, 00:13:22.383 "data_size": 65536 00:13:22.383 } 00:13:22.383 ] 00:13:22.383 }' 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.383 23:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.953 23:08:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.953 [2024-11-18 23:08:42.176553] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.953 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.953 "name": "Existed_Raid", 00:13:22.953 "aliases": [ 00:13:22.953 "ce10b303-4b77-4f36-8b61-33d23b3932ae" 00:13:22.953 ], 00:13:22.953 "product_name": "Raid Volume", 00:13:22.953 "block_size": 512, 00:13:22.953 "num_blocks": 131072, 00:13:22.953 "uuid": "ce10b303-4b77-4f36-8b61-33d23b3932ae", 00:13:22.953 "assigned_rate_limits": { 00:13:22.954 "rw_ios_per_sec": 0, 00:13:22.954 "rw_mbytes_per_sec": 0, 00:13:22.954 "r_mbytes_per_sec": 0, 00:13:22.954 "w_mbytes_per_sec": 0 00:13:22.954 }, 00:13:22.954 "claimed": false, 00:13:22.954 "zoned": false, 00:13:22.954 "supported_io_types": { 00:13:22.954 "read": true, 00:13:22.954 "write": true, 00:13:22.954 "unmap": false, 00:13:22.954 "flush": false, 00:13:22.954 "reset": true, 00:13:22.954 "nvme_admin": false, 00:13:22.954 "nvme_io": false, 00:13:22.954 "nvme_io_md": false, 00:13:22.954 "write_zeroes": true, 00:13:22.954 "zcopy": false, 00:13:22.954 "get_zone_info": false, 00:13:22.954 "zone_management": false, 00:13:22.954 "zone_append": false, 
00:13:22.954 "compare": false, 00:13:22.954 "compare_and_write": false, 00:13:22.954 "abort": false, 00:13:22.954 "seek_hole": false, 00:13:22.954 "seek_data": false, 00:13:22.954 "copy": false, 00:13:22.954 "nvme_iov_md": false 00:13:22.954 }, 00:13:22.954 "driver_specific": { 00:13:22.954 "raid": { 00:13:22.954 "uuid": "ce10b303-4b77-4f36-8b61-33d23b3932ae", 00:13:22.954 "strip_size_kb": 64, 00:13:22.954 "state": "online", 00:13:22.954 "raid_level": "raid5f", 00:13:22.954 "superblock": false, 00:13:22.954 "num_base_bdevs": 3, 00:13:22.954 "num_base_bdevs_discovered": 3, 00:13:22.954 "num_base_bdevs_operational": 3, 00:13:22.954 "base_bdevs_list": [ 00:13:22.954 { 00:13:22.954 "name": "BaseBdev1", 00:13:22.954 "uuid": "15469bb1-2794-4f5a-89b7-16db37829ec9", 00:13:22.954 "is_configured": true, 00:13:22.954 "data_offset": 0, 00:13:22.954 "data_size": 65536 00:13:22.954 }, 00:13:22.954 { 00:13:22.954 "name": "BaseBdev2", 00:13:22.954 "uuid": "e2e82ab6-d322-4f1e-ac31-354008c7935c", 00:13:22.954 "is_configured": true, 00:13:22.954 "data_offset": 0, 00:13:22.954 "data_size": 65536 00:13:22.954 }, 00:13:22.954 { 00:13:22.954 "name": "BaseBdev3", 00:13:22.954 "uuid": "6d9d9ef3-8a59-445b-9297-920585c1edfa", 00:13:22.954 "is_configured": true, 00:13:22.954 "data_offset": 0, 00:13:22.954 "data_size": 65536 00:13:22.954 } 00:13:22.954 ] 00:13:22.954 } 00:13:22.954 } 00:13:22.954 }' 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:22.954 BaseBdev2 00:13:22.954 BaseBdev3' 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.954 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.214 [2024-11-18 23:08:42.443950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:23.214 
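The loop traced above compares each base bdev's block/metadata properties against the raid bdev's, using two jq filters: `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` to pick the configured base bdev names, and `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` to build the comparison string (here `'512   '`, since the Malloc bdevs carry no metadata fields). A minimal Python sketch of what those filters compute, over a trimmed, hypothetical stand-in for the RPC output:

```python
# Hypothetical, trimmed stand-in for `bdev_raid_get_bdevs` output; the real
# JSON (see the trace above) carries many more fields.
raid_bdev = {
    "block_size": 512,
    "md_size": None,
    "md_interleave": None,
    "dif_type": None,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "BaseBdev1", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "BaseBdev3", "is_configured": True},
            ]
        }
    },
}

def props(bdev):
    # Mirrors jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # jq renders null as an empty string, so absent fields become bare spaces.
    return " ".join(
        "" if bdev[k] is None else str(bdev[k])
        for k in ("block_size", "md_size", "md_interleave", "dif_type")
    )

# Mirrors jq: .driver_specific.raid.base_bdevs_list[]
#             | select(.is_configured == true).name
configured = [
    b["name"]
    for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]

print(configured)
print(repr(props(raid_bdev)))
```

This is only an illustration of the comparison logic; the test itself does it in bash with `jq -r` and a `[[ ... == ... ]]` pattern match against `512` followed by three spaces.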
23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.214 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.215 "name": "Existed_Raid", 00:13:23.215 "uuid": "ce10b303-4b77-4f36-8b61-33d23b3932ae", 00:13:23.215 "strip_size_kb": 64, 00:13:23.215 "state": 
"online", 00:13:23.215 "raid_level": "raid5f", 00:13:23.215 "superblock": false, 00:13:23.215 "num_base_bdevs": 3, 00:13:23.215 "num_base_bdevs_discovered": 2, 00:13:23.215 "num_base_bdevs_operational": 2, 00:13:23.215 "base_bdevs_list": [ 00:13:23.215 { 00:13:23.215 "name": null, 00:13:23.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.215 "is_configured": false, 00:13:23.215 "data_offset": 0, 00:13:23.215 "data_size": 65536 00:13:23.215 }, 00:13:23.215 { 00:13:23.215 "name": "BaseBdev2", 00:13:23.215 "uuid": "e2e82ab6-d322-4f1e-ac31-354008c7935c", 00:13:23.215 "is_configured": true, 00:13:23.215 "data_offset": 0, 00:13:23.215 "data_size": 65536 00:13:23.215 }, 00:13:23.215 { 00:13:23.215 "name": "BaseBdev3", 00:13:23.215 "uuid": "6d9d9ef3-8a59-445b-9297-920585c1edfa", 00:13:23.215 "is_configured": true, 00:13:23.215 "data_offset": 0, 00:13:23.215 "data_size": 65536 00:13:23.215 } 00:13:23.215 ] 00:13:23.215 }' 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.215 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 [2024-11-18 23:08:42.910486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.786 [2024-11-18 23:08:42.910627] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.786 [2024-11-18 23:08:42.921543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 [2024-11-18 23:08:42.981480] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:23.786 [2024-11-18 23:08:42.981568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 BaseBdev2 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.786 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:23.786 [ 00:13:23.786 { 00:13:23.786 "name": "BaseBdev2", 00:13:23.786 "aliases": [ 00:13:23.786 "082e1ed4-6069-4645-8062-98e46279dbfa" 00:13:23.786 ], 00:13:23.786 "product_name": "Malloc disk", 00:13:23.786 "block_size": 512, 00:13:23.786 "num_blocks": 65536, 00:13:23.786 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:23.786 "assigned_rate_limits": { 00:13:23.786 "rw_ios_per_sec": 0, 00:13:23.787 "rw_mbytes_per_sec": 0, 00:13:23.787 "r_mbytes_per_sec": 0, 00:13:23.787 "w_mbytes_per_sec": 0 00:13:23.787 }, 00:13:23.787 "claimed": false, 00:13:23.787 "zoned": false, 00:13:23.787 "supported_io_types": { 00:13:23.787 "read": true, 00:13:23.787 "write": true, 00:13:23.787 "unmap": true, 00:13:23.787 "flush": true, 00:13:23.787 "reset": true, 00:13:23.787 "nvme_admin": false, 00:13:23.787 "nvme_io": false, 00:13:23.787 "nvme_io_md": false, 00:13:23.787 "write_zeroes": true, 00:13:23.787 "zcopy": true, 00:13:23.787 "get_zone_info": false, 00:13:23.787 "zone_management": false, 00:13:23.787 "zone_append": false, 00:13:23.787 "compare": false, 00:13:23.787 "compare_and_write": false, 00:13:23.787 "abort": true, 00:13:23.787 "seek_hole": false, 00:13:23.787 "seek_data": false, 00:13:23.787 "copy": true, 00:13:23.787 "nvme_iov_md": false 00:13:23.787 }, 00:13:23.787 "memory_domains": [ 00:13:23.787 { 00:13:23.787 "dma_device_id": "system", 00:13:23.787 "dma_device_type": 1 00:13:23.787 }, 00:13:23.787 { 00:13:23.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.787 "dma_device_type": 2 00:13:23.787 } 00:13:23.787 ], 00:13:23.787 "driver_specific": {} 00:13:23.787 } 00:13:23.787 ] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 BaseBdev3 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.787 [ 00:13:23.787 { 00:13:23.787 "name": "BaseBdev3", 00:13:23.787 "aliases": [ 00:13:23.787 "b561e095-d8cb-4cd5-9f42-e256338ad7f8" 00:13:23.787 ], 00:13:23.787 "product_name": "Malloc disk", 00:13:23.787 "block_size": 512, 00:13:23.787 "num_blocks": 65536, 00:13:23.787 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:23.787 "assigned_rate_limits": { 00:13:23.787 "rw_ios_per_sec": 0, 00:13:23.787 "rw_mbytes_per_sec": 0, 00:13:23.787 "r_mbytes_per_sec": 0, 00:13:23.787 "w_mbytes_per_sec": 0 00:13:23.787 }, 00:13:23.787 "claimed": false, 00:13:23.787 "zoned": false, 00:13:23.787 "supported_io_types": { 00:13:23.787 "read": true, 00:13:23.787 "write": true, 00:13:23.787 "unmap": true, 00:13:23.787 "flush": true, 00:13:23.787 "reset": true, 00:13:23.787 "nvme_admin": false, 00:13:23.787 "nvme_io": false, 00:13:23.787 "nvme_io_md": false, 00:13:23.787 "write_zeroes": true, 00:13:23.787 "zcopy": true, 00:13:23.787 "get_zone_info": false, 00:13:23.787 "zone_management": false, 00:13:23.787 "zone_append": false, 00:13:23.787 "compare": false, 00:13:23.787 "compare_and_write": false, 00:13:23.787 "abort": true, 00:13:23.787 "seek_hole": false, 00:13:23.787 "seek_data": false, 00:13:23.787 "copy": true, 00:13:23.787 "nvme_iov_md": false 00:13:23.787 }, 00:13:23.787 "memory_domains": [ 00:13:23.787 { 00:13:23.787 "dma_device_id": "system", 00:13:23.787 "dma_device_type": 1 00:13:23.787 }, 00:13:23.787 { 00:13:23.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.787 "dma_device_type": 2 00:13:23.787 } 00:13:23.787 ], 00:13:23.787 "driver_specific": {} 00:13:23.787 } 00:13:23.787 ] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:23.787 23:08:43 
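After each `bdev_malloc_create`, the trace shows `waitforbdev` invoking `rpc_cmd bdev_get_bdevs -b <name> -t 2000`, i.e. asking for the bdev by name with a 2000 ms timeout before the helper `return 0`s. The real wait happens server-side in SPDK via the `-t` option; the sketch below is only a hypothetical client-side approximation of that pattern, with `get_bdevs` standing in for the RPC:

```python
# Hypothetical client-side approximation of the waitforbdev pattern seen in
# the trace (`rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000`): poll for a bdev
# by name until it appears or the timeout elapses.
import time

def wait_for_bdev(get_bdevs, name, timeout_ms=2000, poll_ms=50):
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        # `get_bdevs` stands in for the bdev_get_bdevs RPC call.
        if any(b.get("name") == name for b in get_bdevs()):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_ms / 1000.0)

# Simulated RPC responses: in the trace, BaseBdev3 exists by the time the
# helper runs, so the wait succeeds immediately.
print(wait_for_bdev(lambda: [{"name": "BaseBdev3"}], "BaseBdev3"))
print(wait_for_bdev(lambda: [], "MissingBdev", timeout_ms=100))
```

In the actual suite the timeout is handled by the RPC server, not by a polling loop in the shell; this sketch just makes the wait-until-present semantics explicit.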
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 [2024-11-18 23:08:43.155819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.787 [2024-11-18 23:08:43.155922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.787 [2024-11-18 23:08:43.155964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.787 [2024-11-18 23:08:43.157802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.787 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.049 23:08:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.049 "name": "Existed_Raid", 00:13:24.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.049 "strip_size_kb": 64, 00:13:24.049 "state": "configuring", 00:13:24.049 "raid_level": "raid5f", 00:13:24.049 "superblock": false, 00:13:24.049 "num_base_bdevs": 3, 00:13:24.049 "num_base_bdevs_discovered": 2, 00:13:24.049 "num_base_bdevs_operational": 3, 00:13:24.049 "base_bdevs_list": [ 00:13:24.049 { 00:13:24.049 "name": "BaseBdev1", 00:13:24.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.049 "is_configured": false, 00:13:24.049 "data_offset": 0, 00:13:24.049 "data_size": 0 00:13:24.049 }, 00:13:24.049 { 00:13:24.049 "name": "BaseBdev2", 00:13:24.049 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:24.049 "is_configured": true, 00:13:24.049 "data_offset": 0, 00:13:24.049 "data_size": 65536 00:13:24.049 }, 00:13:24.049 { 00:13:24.049 "name": "BaseBdev3", 00:13:24.049 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:24.049 "is_configured": true, 
00:13:24.049 "data_offset": 0, 00:13:24.049 "data_size": 65536 00:13:24.049 } 00:13:24.049 ] 00:13:24.049 }' 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.049 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 [2024-11-18 23:08:43.615099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.309 23:08:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.309 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.309 "name": "Existed_Raid", 00:13:24.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.309 "strip_size_kb": 64, 00:13:24.309 "state": "configuring", 00:13:24.309 "raid_level": "raid5f", 00:13:24.309 "superblock": false, 00:13:24.309 "num_base_bdevs": 3, 00:13:24.309 "num_base_bdevs_discovered": 1, 00:13:24.309 "num_base_bdevs_operational": 3, 00:13:24.309 "base_bdevs_list": [ 00:13:24.309 { 00:13:24.309 "name": "BaseBdev1", 00:13:24.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.309 "is_configured": false, 00:13:24.309 "data_offset": 0, 00:13:24.309 "data_size": 0 00:13:24.309 }, 00:13:24.309 { 00:13:24.309 "name": null, 00:13:24.309 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:24.309 "is_configured": false, 00:13:24.309 "data_offset": 0, 00:13:24.309 "data_size": 65536 00:13:24.310 }, 00:13:24.310 { 00:13:24.310 "name": "BaseBdev3", 00:13:24.310 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:24.310 "is_configured": true, 00:13:24.310 "data_offset": 0, 00:13:24.310 "data_size": 65536 00:13:24.310 } 00:13:24.310 ] 00:13:24.310 }' 00:13:24.310 23:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.310 23:08:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 [2024-11-18 23:08:44.143441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.891 BaseBdev1 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:24.891 23:08:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 [ 00:13:24.891 { 00:13:24.891 "name": "BaseBdev1", 00:13:24.891 "aliases": [ 00:13:24.891 "d4a170fa-f34d-4e4a-ab66-35216293484e" 00:13:24.891 ], 00:13:24.891 "product_name": "Malloc disk", 00:13:24.891 "block_size": 512, 00:13:24.891 "num_blocks": 65536, 00:13:24.891 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:24.891 "assigned_rate_limits": { 00:13:24.891 "rw_ios_per_sec": 0, 00:13:24.891 "rw_mbytes_per_sec": 0, 00:13:24.891 "r_mbytes_per_sec": 0, 00:13:24.891 "w_mbytes_per_sec": 0 00:13:24.891 }, 00:13:24.891 "claimed": true, 00:13:24.891 "claim_type": "exclusive_write", 00:13:24.891 "zoned": false, 00:13:24.891 "supported_io_types": { 00:13:24.891 "read": true, 00:13:24.891 "write": true, 00:13:24.891 "unmap": true, 00:13:24.891 "flush": true, 00:13:24.891 "reset": true, 00:13:24.891 "nvme_admin": false, 00:13:24.891 "nvme_io": false, 00:13:24.891 "nvme_io_md": false, 00:13:24.891 "write_zeroes": true, 00:13:24.891 "zcopy": true, 00:13:24.891 "get_zone_info": false, 00:13:24.891 "zone_management": false, 00:13:24.891 "zone_append": false, 00:13:24.891 
"compare": false, 00:13:24.891 "compare_and_write": false, 00:13:24.891 "abort": true, 00:13:24.891 "seek_hole": false, 00:13:24.891 "seek_data": false, 00:13:24.891 "copy": true, 00:13:24.891 "nvme_iov_md": false 00:13:24.891 }, 00:13:24.891 "memory_domains": [ 00:13:24.891 { 00:13:24.891 "dma_device_id": "system", 00:13:24.891 "dma_device_type": 1 00:13:24.891 }, 00:13:24.891 { 00:13:24.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.891 "dma_device_type": 2 00:13:24.891 } 00:13:24.891 ], 00:13:24.891 "driver_specific": {} 00:13:24.891 } 00:13:24.891 ] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.891 23:08:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.891 "name": "Existed_Raid", 00:13:24.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.891 "strip_size_kb": 64, 00:13:24.891 "state": "configuring", 00:13:24.891 "raid_level": "raid5f", 00:13:24.891 "superblock": false, 00:13:24.891 "num_base_bdevs": 3, 00:13:24.891 "num_base_bdevs_discovered": 2, 00:13:24.891 "num_base_bdevs_operational": 3, 00:13:24.891 "base_bdevs_list": [ 00:13:24.891 { 00:13:24.891 "name": "BaseBdev1", 00:13:24.891 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:24.891 "is_configured": true, 00:13:24.891 "data_offset": 0, 00:13:24.891 "data_size": 65536 00:13:24.891 }, 00:13:24.891 { 00:13:24.891 "name": null, 00:13:24.891 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:24.891 "is_configured": false, 00:13:24.891 "data_offset": 0, 00:13:24.891 "data_size": 65536 00:13:24.891 }, 00:13:24.891 { 00:13:24.891 "name": "BaseBdev3", 00:13:24.891 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:24.891 "is_configured": true, 00:13:24.891 "data_offset": 0, 00:13:24.891 "data_size": 65536 00:13:24.891 } 00:13:24.891 ] 00:13:24.891 }' 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.891 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.474 23:08:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.474 [2024-11-18 23:08:44.727387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.474 23:08:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.474 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.475 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.475 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.475 "name": "Existed_Raid", 00:13:25.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.475 "strip_size_kb": 64, 00:13:25.475 "state": "configuring", 00:13:25.475 "raid_level": "raid5f", 00:13:25.475 "superblock": false, 00:13:25.475 "num_base_bdevs": 3, 00:13:25.475 "num_base_bdevs_discovered": 1, 00:13:25.475 "num_base_bdevs_operational": 3, 00:13:25.475 "base_bdevs_list": [ 00:13:25.475 { 00:13:25.475 "name": "BaseBdev1", 00:13:25.475 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:25.475 "is_configured": true, 00:13:25.475 "data_offset": 0, 00:13:25.475 "data_size": 65536 00:13:25.475 }, 00:13:25.475 { 00:13:25.475 "name": null, 00:13:25.475 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:25.475 "is_configured": false, 00:13:25.475 "data_offset": 0, 00:13:25.475 "data_size": 65536 00:13:25.475 }, 00:13:25.475 { 00:13:25.475 "name": null, 
00:13:25.475 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:25.475 "is_configured": false, 00:13:25.475 "data_offset": 0, 00:13:25.475 "data_size": 65536 00:13:25.475 } 00:13:25.475 ] 00:13:25.475 }' 00:13:25.475 23:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.475 23:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 [2024-11-18 23:08:45.235420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.049 23:08:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.049 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.050 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.050 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.050 "name": "Existed_Raid", 00:13:26.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.050 "strip_size_kb": 64, 00:13:26.050 "state": "configuring", 00:13:26.050 "raid_level": "raid5f", 00:13:26.050 "superblock": false, 00:13:26.050 "num_base_bdevs": 3, 00:13:26.050 "num_base_bdevs_discovered": 2, 00:13:26.050 "num_base_bdevs_operational": 3, 00:13:26.050 "base_bdevs_list": [ 00:13:26.050 { 
00:13:26.050 "name": "BaseBdev1", 00:13:26.050 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:26.050 "is_configured": true, 00:13:26.050 "data_offset": 0, 00:13:26.050 "data_size": 65536 00:13:26.050 }, 00:13:26.050 { 00:13:26.050 "name": null, 00:13:26.050 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:26.050 "is_configured": false, 00:13:26.050 "data_offset": 0, 00:13:26.050 "data_size": 65536 00:13:26.050 }, 00:13:26.050 { 00:13:26.050 "name": "BaseBdev3", 00:13:26.050 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:26.050 "is_configured": true, 00:13:26.050 "data_offset": 0, 00:13:26.050 "data_size": 65536 00:13:26.050 } 00:13:26.050 ] 00:13:26.050 }' 00:13:26.050 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.050 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.620 [2024-11-18 23:08:45.771419] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.620 "name": "Existed_Raid", 00:13:26.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.620 "strip_size_kb": 64, 00:13:26.620 "state": "configuring", 00:13:26.620 "raid_level": "raid5f", 00:13:26.620 "superblock": false, 00:13:26.620 "num_base_bdevs": 3, 00:13:26.620 "num_base_bdevs_discovered": 1, 00:13:26.620 "num_base_bdevs_operational": 3, 00:13:26.620 "base_bdevs_list": [ 00:13:26.620 { 00:13:26.620 "name": null, 00:13:26.620 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:26.620 "is_configured": false, 00:13:26.620 "data_offset": 0, 00:13:26.620 "data_size": 65536 00:13:26.620 }, 00:13:26.620 { 00:13:26.620 "name": null, 00:13:26.620 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:26.620 "is_configured": false, 00:13:26.620 "data_offset": 0, 00:13:26.620 "data_size": 65536 00:13:26.620 }, 00:13:26.620 { 00:13:26.620 "name": "BaseBdev3", 00:13:26.620 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:26.620 "is_configured": true, 00:13:26.620 "data_offset": 0, 00:13:26.620 "data_size": 65536 00:13:26.620 } 00:13:26.620 ] 00:13:26.620 }' 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.620 23:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.878 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.878 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.878 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.878 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 [2024-11-18 23:08:46.296920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.138 23:08:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.138 "name": "Existed_Raid", 00:13:27.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.138 "strip_size_kb": 64, 00:13:27.138 "state": "configuring", 00:13:27.138 "raid_level": "raid5f", 00:13:27.138 "superblock": false, 00:13:27.138 "num_base_bdevs": 3, 00:13:27.138 "num_base_bdevs_discovered": 2, 00:13:27.138 "num_base_bdevs_operational": 3, 00:13:27.138 "base_bdevs_list": [ 00:13:27.138 { 00:13:27.138 "name": null, 00:13:27.138 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:27.138 "is_configured": false, 00:13:27.138 "data_offset": 0, 00:13:27.138 "data_size": 65536 00:13:27.138 }, 00:13:27.138 { 00:13:27.138 "name": "BaseBdev2", 00:13:27.138 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:27.138 "is_configured": true, 00:13:27.138 "data_offset": 0, 00:13:27.138 "data_size": 65536 00:13:27.138 }, 00:13:27.138 { 00:13:27.138 "name": "BaseBdev3", 00:13:27.138 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:27.138 "is_configured": true, 00:13:27.138 "data_offset": 0, 00:13:27.138 "data_size": 65536 00:13:27.138 } 00:13:27.138 ] 00:13:27.138 }' 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.138 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.398 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.398 23:08:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.398 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.398 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d4a170fa-f34d-4e4a-ab66-35216293484e 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.659 [2024-11-18 23:08:46.870352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:27.659 [2024-11-18 23:08:46.870393] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:27.659 [2024-11-18 23:08:46.870402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:27.659 [2024-11-18 23:08:46.870684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:13:27.659 [2024-11-18 23:08:46.871090] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:27.659 [2024-11-18 23:08:46.871109] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:27.659 [2024-11-18 23:08:46.871262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.659 NewBaseBdev 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:27.659 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.659 23:08:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.659 [ 00:13:27.659 { 00:13:27.659 "name": "NewBaseBdev", 00:13:27.659 "aliases": [ 00:13:27.659 "d4a170fa-f34d-4e4a-ab66-35216293484e" 00:13:27.659 ], 00:13:27.659 "product_name": "Malloc disk", 00:13:27.659 "block_size": 512, 00:13:27.659 "num_blocks": 65536, 00:13:27.659 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:27.659 "assigned_rate_limits": { 00:13:27.659 "rw_ios_per_sec": 0, 00:13:27.659 "rw_mbytes_per_sec": 0, 00:13:27.659 "r_mbytes_per_sec": 0, 00:13:27.659 "w_mbytes_per_sec": 0 00:13:27.659 }, 00:13:27.659 "claimed": true, 00:13:27.659 "claim_type": "exclusive_write", 00:13:27.659 "zoned": false, 00:13:27.659 "supported_io_types": { 00:13:27.659 "read": true, 00:13:27.659 "write": true, 00:13:27.659 "unmap": true, 00:13:27.659 "flush": true, 00:13:27.659 "reset": true, 00:13:27.659 "nvme_admin": false, 00:13:27.659 "nvme_io": false, 00:13:27.659 "nvme_io_md": false, 00:13:27.659 "write_zeroes": true, 00:13:27.659 "zcopy": true, 00:13:27.659 "get_zone_info": false, 00:13:27.659 "zone_management": false, 00:13:27.659 "zone_append": false, 00:13:27.659 "compare": false, 00:13:27.659 "compare_and_write": false, 00:13:27.659 "abort": true, 00:13:27.659 "seek_hole": false, 00:13:27.659 "seek_data": false, 00:13:27.659 "copy": true, 00:13:27.659 "nvme_iov_md": false 00:13:27.659 }, 00:13:27.659 "memory_domains": [ 00:13:27.659 { 00:13:27.659 "dma_device_id": "system", 00:13:27.659 "dma_device_type": 1 00:13:27.659 }, 00:13:27.659 { 00:13:27.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.659 "dma_device_type": 2 00:13:27.659 } 00:13:27.659 ], 00:13:27.659 "driver_specific": {} 00:13:27.659 } 00:13:27.660 ] 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:27.660 23:08:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.660 "name": "Existed_Raid", 00:13:27.660 "uuid": "13196d8c-f8d8-4675-a0da-04b439136a06", 00:13:27.660 "strip_size_kb": 64, 00:13:27.660 "state": "online", 
00:13:27.660 "raid_level": "raid5f", 00:13:27.660 "superblock": false, 00:13:27.660 "num_base_bdevs": 3, 00:13:27.660 "num_base_bdevs_discovered": 3, 00:13:27.660 "num_base_bdevs_operational": 3, 00:13:27.660 "base_bdevs_list": [ 00:13:27.660 { 00:13:27.660 "name": "NewBaseBdev", 00:13:27.660 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:27.660 "is_configured": true, 00:13:27.660 "data_offset": 0, 00:13:27.660 "data_size": 65536 00:13:27.660 }, 00:13:27.660 { 00:13:27.660 "name": "BaseBdev2", 00:13:27.660 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:27.660 "is_configured": true, 00:13:27.660 "data_offset": 0, 00:13:27.660 "data_size": 65536 00:13:27.660 }, 00:13:27.660 { 00:13:27.660 "name": "BaseBdev3", 00:13:27.660 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:27.660 "is_configured": true, 00:13:27.660 "data_offset": 0, 00:13:27.660 "data_size": 65536 00:13:27.660 } 00:13:27.660 ] 00:13:27.660 }' 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.660 23:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:28.231 23:08:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.231 [2024-11-18 23:08:47.385649] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:28.231 "name": "Existed_Raid", 00:13:28.231 "aliases": [ 00:13:28.231 "13196d8c-f8d8-4675-a0da-04b439136a06" 00:13:28.231 ], 00:13:28.231 "product_name": "Raid Volume", 00:13:28.231 "block_size": 512, 00:13:28.231 "num_blocks": 131072, 00:13:28.231 "uuid": "13196d8c-f8d8-4675-a0da-04b439136a06", 00:13:28.231 "assigned_rate_limits": { 00:13:28.231 "rw_ios_per_sec": 0, 00:13:28.231 "rw_mbytes_per_sec": 0, 00:13:28.231 "r_mbytes_per_sec": 0, 00:13:28.231 "w_mbytes_per_sec": 0 00:13:28.231 }, 00:13:28.231 "claimed": false, 00:13:28.231 "zoned": false, 00:13:28.231 "supported_io_types": { 00:13:28.231 "read": true, 00:13:28.231 "write": true, 00:13:28.231 "unmap": false, 00:13:28.231 "flush": false, 00:13:28.231 "reset": true, 00:13:28.231 "nvme_admin": false, 00:13:28.231 "nvme_io": false, 00:13:28.231 "nvme_io_md": false, 00:13:28.231 "write_zeroes": true, 00:13:28.231 "zcopy": false, 00:13:28.231 "get_zone_info": false, 00:13:28.231 "zone_management": false, 00:13:28.231 "zone_append": false, 00:13:28.231 "compare": false, 00:13:28.231 "compare_and_write": false, 00:13:28.231 "abort": false, 00:13:28.231 "seek_hole": false, 00:13:28.231 "seek_data": false, 00:13:28.231 "copy": false, 00:13:28.231 "nvme_iov_md": false 00:13:28.231 }, 00:13:28.231 "driver_specific": { 00:13:28.231 "raid": { 00:13:28.231 "uuid": 
"13196d8c-f8d8-4675-a0da-04b439136a06", 00:13:28.231 "strip_size_kb": 64, 00:13:28.231 "state": "online", 00:13:28.231 "raid_level": "raid5f", 00:13:28.231 "superblock": false, 00:13:28.231 "num_base_bdevs": 3, 00:13:28.231 "num_base_bdevs_discovered": 3, 00:13:28.231 "num_base_bdevs_operational": 3, 00:13:28.231 "base_bdevs_list": [ 00:13:28.231 { 00:13:28.231 "name": "NewBaseBdev", 00:13:28.231 "uuid": "d4a170fa-f34d-4e4a-ab66-35216293484e", 00:13:28.231 "is_configured": true, 00:13:28.231 "data_offset": 0, 00:13:28.231 "data_size": 65536 00:13:28.231 }, 00:13:28.231 { 00:13:28.231 "name": "BaseBdev2", 00:13:28.231 "uuid": "082e1ed4-6069-4645-8062-98e46279dbfa", 00:13:28.231 "is_configured": true, 00:13:28.231 "data_offset": 0, 00:13:28.231 "data_size": 65536 00:13:28.231 }, 00:13:28.231 { 00:13:28.231 "name": "BaseBdev3", 00:13:28.231 "uuid": "b561e095-d8cb-4cd5-9f42-e256338ad7f8", 00:13:28.231 "is_configured": true, 00:13:28.231 "data_offset": 0, 00:13:28.231 "data_size": 65536 00:13:28.231 } 00:13:28.231 ] 00:13:28.231 } 00:13:28.231 } 00:13:28.231 }' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:28.231 BaseBdev2 00:13:28.231 BaseBdev3' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.231 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.491 [2024-11-18 23:08:47.661013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.491 [2024-11-18 23:08:47.661037] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.491 [2024-11-18 23:08:47.661093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.491 [2024-11-18 23:08:47.661323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.491 [2024-11-18 23:08:47.661336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90403 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90403 ']' 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90403 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90403 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90403' 00:13:28.491 killing process with pid 90403 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90403 00:13:28.491 [2024-11-18 23:08:47.710520] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.491 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90403 00:13:28.491 [2024-11-18 23:08:47.740420] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.752 23:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:28.752 00:13:28.752 real 0m9.186s 00:13:28.752 user 0m15.670s 00:13:28.752 sys 0m1.983s 00:13:28.752 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.752 23:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.752 ************************************ 00:13:28.752 END TEST raid5f_state_function_test 00:13:28.752 ************************************ 00:13:28.752 23:08:48 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:28.752 23:08:48 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:28.752 23:08:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.752 23:08:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.752 ************************************ 00:13:28.752 START TEST raid5f_state_function_test_sb 00:13:28.752 ************************************ 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:28.752 23:08:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91013 00:13:28.752 Process raid pid: 91013 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91013' 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91013 00:13:28.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91013 ']' 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.752 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.019 [2024-11-18 23:08:48.170303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:29.019 [2024-11-18 23:08:48.170420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.019 [2024-11-18 23:08:48.330439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.019 [2024-11-18 23:08:48.376172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.283 [2024-11-18 23:08:48.419693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.283 [2024-11-18 23:08:48.419730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.853 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.853 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:29.853 23:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:29.853 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.853 23:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 [2024-11-18 23:08:49.001732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.853 [2024-11-18 23:08:49.001783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.853 [2024-11-18 23:08:49.001797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.853 [2024-11-18 23:08:49.001806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.853 [2024-11-18 23:08:49.001812] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:29.853 [2024-11-18 23:08:49.001822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 23:08:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.853 "name": "Existed_Raid", 00:13:29.853 "uuid": "6c468767-b2a6-4fd4-ac6f-b876571cf9c8", 00:13:29.853 "strip_size_kb": 64, 00:13:29.853 "state": "configuring", 00:13:29.853 "raid_level": "raid5f", 00:13:29.853 "superblock": true, 00:13:29.853 "num_base_bdevs": 3, 00:13:29.853 "num_base_bdevs_discovered": 0, 00:13:29.853 "num_base_bdevs_operational": 3, 00:13:29.853 "base_bdevs_list": [ 00:13:29.853 { 00:13:29.853 "name": "BaseBdev1", 00:13:29.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.853 "is_configured": false, 00:13:29.853 "data_offset": 0, 00:13:29.853 "data_size": 0 00:13:29.853 }, 00:13:29.853 { 00:13:29.853 "name": "BaseBdev2", 00:13:29.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.853 "is_configured": false, 00:13:29.853 "data_offset": 0, 00:13:29.853 "data_size": 0 00:13:29.853 }, 00:13:29.853 { 00:13:29.853 "name": "BaseBdev3", 00:13:29.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.853 "is_configured": false, 00:13:29.853 "data_offset": 0, 00:13:29.853 "data_size": 0 00:13:29.853 } 00:13:29.853 ] 00:13:29.853 }' 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.853 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 [2024-11-18 23:08:49.460789] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.113 
[2024-11-18 23:08:49.460824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 [2024-11-18 23:08:49.472805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.113 [2024-11-18 23:08:49.472915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.113 [2024-11-18 23:08:49.472927] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.113 [2024-11-18 23:08:49.472936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.113 [2024-11-18 23:08:49.472942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:30.113 [2024-11-18 23:08:49.472951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.113 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.373 [2024-11-18 23:08:49.493917] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.373 BaseBdev1 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.373 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.373 [ 00:13:30.373 { 00:13:30.373 "name": "BaseBdev1", 00:13:30.373 "aliases": [ 00:13:30.373 "da769bde-9737-4eb7-9d35-2e568ec92c06" 00:13:30.373 ], 00:13:30.373 "product_name": "Malloc disk", 00:13:30.373 "block_size": 512, 00:13:30.373 
"num_blocks": 65536, 00:13:30.373 "uuid": "da769bde-9737-4eb7-9d35-2e568ec92c06", 00:13:30.373 "assigned_rate_limits": { 00:13:30.373 "rw_ios_per_sec": 0, 00:13:30.373 "rw_mbytes_per_sec": 0, 00:13:30.373 "r_mbytes_per_sec": 0, 00:13:30.373 "w_mbytes_per_sec": 0 00:13:30.373 }, 00:13:30.373 "claimed": true, 00:13:30.373 "claim_type": "exclusive_write", 00:13:30.373 "zoned": false, 00:13:30.373 "supported_io_types": { 00:13:30.373 "read": true, 00:13:30.373 "write": true, 00:13:30.373 "unmap": true, 00:13:30.373 "flush": true, 00:13:30.373 "reset": true, 00:13:30.373 "nvme_admin": false, 00:13:30.373 "nvme_io": false, 00:13:30.373 "nvme_io_md": false, 00:13:30.373 "write_zeroes": true, 00:13:30.373 "zcopy": true, 00:13:30.373 "get_zone_info": false, 00:13:30.373 "zone_management": false, 00:13:30.373 "zone_append": false, 00:13:30.373 "compare": false, 00:13:30.373 "compare_and_write": false, 00:13:30.373 "abort": true, 00:13:30.373 "seek_hole": false, 00:13:30.373 "seek_data": false, 00:13:30.373 "copy": true, 00:13:30.373 "nvme_iov_md": false 00:13:30.373 }, 00:13:30.373 "memory_domains": [ 00:13:30.373 { 00:13:30.373 "dma_device_id": "system", 00:13:30.374 "dma_device_type": 1 00:13:30.374 }, 00:13:30.374 { 00:13:30.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.374 "dma_device_type": 2 00:13:30.374 } 00:13:30.374 ], 00:13:30.374 "driver_specific": {} 00:13:30.374 } 00:13:30.374 ] 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.374 "name": "Existed_Raid", 00:13:30.374 "uuid": "bc53c456-2713-47f9-bd5f-a8df6cea97b8", 00:13:30.374 "strip_size_kb": 64, 00:13:30.374 "state": "configuring", 00:13:30.374 "raid_level": "raid5f", 00:13:30.374 "superblock": true, 00:13:30.374 "num_base_bdevs": 3, 00:13:30.374 "num_base_bdevs_discovered": 1, 00:13:30.374 "num_base_bdevs_operational": 3, 00:13:30.374 "base_bdevs_list": [ 00:13:30.374 { 00:13:30.374 
"name": "BaseBdev1", 00:13:30.374 "uuid": "da769bde-9737-4eb7-9d35-2e568ec92c06", 00:13:30.374 "is_configured": true, 00:13:30.374 "data_offset": 2048, 00:13:30.374 "data_size": 63488 00:13:30.374 }, 00:13:30.374 { 00:13:30.374 "name": "BaseBdev2", 00:13:30.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.374 "is_configured": false, 00:13:30.374 "data_offset": 0, 00:13:30.374 "data_size": 0 00:13:30.374 }, 00:13:30.374 { 00:13:30.374 "name": "BaseBdev3", 00:13:30.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.374 "is_configured": false, 00:13:30.374 "data_offset": 0, 00:13:30.374 "data_size": 0 00:13:30.374 } 00:13:30.374 ] 00:13:30.374 }' 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.374 23:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.944 [2024-11-18 23:08:50.028990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.944 [2024-11-18 23:08:50.029028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.944 [2024-11-18 23:08:50.041016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.944 [2024-11-18 23:08:50.042708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.944 [2024-11-18 23:08:50.042742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.944 [2024-11-18 23:08:50.042750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:30.944 [2024-11-18 23:08:50.042759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.944 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.945 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.945 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.945 "name": "Existed_Raid", 00:13:30.945 "uuid": "08967771-3067-4f28-b4c1-a233bec8a01e", 00:13:30.945 "strip_size_kb": 64, 00:13:30.945 "state": "configuring", 00:13:30.945 "raid_level": "raid5f", 00:13:30.945 "superblock": true, 00:13:30.945 "num_base_bdevs": 3, 00:13:30.945 "num_base_bdevs_discovered": 1, 00:13:30.945 "num_base_bdevs_operational": 3, 00:13:30.945 "base_bdevs_list": [ 00:13:30.945 { 00:13:30.945 "name": "BaseBdev1", 00:13:30.945 "uuid": "da769bde-9737-4eb7-9d35-2e568ec92c06", 00:13:30.945 "is_configured": true, 00:13:30.945 "data_offset": 2048, 00:13:30.945 "data_size": 63488 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "name": "BaseBdev2", 00:13:30.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.945 "is_configured": false, 00:13:30.945 "data_offset": 0, 00:13:30.945 "data_size": 0 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "name": "BaseBdev3", 00:13:30.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.945 "is_configured": false, 00:13:30.945 "data_offset": 0, 00:13:30.945 "data_size": 
0 00:13:30.945 } 00:13:30.945 ] 00:13:30.945 }' 00:13:30.945 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.945 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 [2024-11-18 23:08:50.478688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.205 BaseBdev2 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 [ 00:13:31.205 { 00:13:31.205 "name": "BaseBdev2", 00:13:31.205 "aliases": [ 00:13:31.205 "e57d7993-b3f3-4e53-91e8-4f0b30efb976" 00:13:31.205 ], 00:13:31.205 "product_name": "Malloc disk", 00:13:31.205 "block_size": 512, 00:13:31.205 "num_blocks": 65536, 00:13:31.205 "uuid": "e57d7993-b3f3-4e53-91e8-4f0b30efb976", 00:13:31.205 "assigned_rate_limits": { 00:13:31.205 "rw_ios_per_sec": 0, 00:13:31.205 "rw_mbytes_per_sec": 0, 00:13:31.205 "r_mbytes_per_sec": 0, 00:13:31.205 "w_mbytes_per_sec": 0 00:13:31.205 }, 00:13:31.205 "claimed": true, 00:13:31.205 "claim_type": "exclusive_write", 00:13:31.205 "zoned": false, 00:13:31.205 "supported_io_types": { 00:13:31.205 "read": true, 00:13:31.205 "write": true, 00:13:31.205 "unmap": true, 00:13:31.205 "flush": true, 00:13:31.205 "reset": true, 00:13:31.205 "nvme_admin": false, 00:13:31.205 "nvme_io": false, 00:13:31.205 "nvme_io_md": false, 00:13:31.205 "write_zeroes": true, 00:13:31.205 "zcopy": true, 00:13:31.205 "get_zone_info": false, 00:13:31.205 "zone_management": false, 00:13:31.205 "zone_append": false, 00:13:31.205 "compare": false, 00:13:31.205 "compare_and_write": false, 00:13:31.205 "abort": true, 00:13:31.205 "seek_hole": false, 00:13:31.205 "seek_data": false, 00:13:31.205 "copy": true, 00:13:31.205 "nvme_iov_md": false 00:13:31.205 }, 00:13:31.205 "memory_domains": [ 00:13:31.205 { 00:13:31.205 "dma_device_id": "system", 00:13:31.205 "dma_device_type": 1 00:13:31.205 }, 00:13:31.205 { 00:13:31.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.205 "dma_device_type": 2 00:13:31.205 } 
00:13:31.205 ], 00:13:31.205 "driver_specific": {} 00:13:31.205 } 00:13:31.205 ] 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.205 "name": "Existed_Raid", 00:13:31.205 "uuid": "08967771-3067-4f28-b4c1-a233bec8a01e", 00:13:31.205 "strip_size_kb": 64, 00:13:31.205 "state": "configuring", 00:13:31.205 "raid_level": "raid5f", 00:13:31.205 "superblock": true, 00:13:31.205 "num_base_bdevs": 3, 00:13:31.205 "num_base_bdevs_discovered": 2, 00:13:31.205 "num_base_bdevs_operational": 3, 00:13:31.205 "base_bdevs_list": [ 00:13:31.205 { 00:13:31.205 "name": "BaseBdev1", 00:13:31.205 "uuid": "da769bde-9737-4eb7-9d35-2e568ec92c06", 00:13:31.205 "is_configured": true, 00:13:31.205 "data_offset": 2048, 00:13:31.205 "data_size": 63488 00:13:31.205 }, 00:13:31.205 { 00:13:31.205 "name": "BaseBdev2", 00:13:31.205 "uuid": "e57d7993-b3f3-4e53-91e8-4f0b30efb976", 00:13:31.205 "is_configured": true, 00:13:31.205 "data_offset": 2048, 00:13:31.205 "data_size": 63488 00:13:31.205 }, 00:13:31.205 { 00:13:31.205 "name": "BaseBdev3", 00:13:31.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.205 "is_configured": false, 00:13:31.205 "data_offset": 0, 00:13:31.205 "data_size": 0 00:13:31.205 } 00:13:31.205 ] 00:13:31.205 }' 00:13:31.205 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.206 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 [2024-11-18 23:08:50.988771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.775 [2024-11-18 23:08:50.989043] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:31.775 [2024-11-18 23:08:50.989088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:31.775 [2024-11-18 23:08:50.989432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:31.775 BaseBdev3 00:13:31.775 [2024-11-18 23:08:50.989881] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:31.775 [2024-11-18 23:08:50.989941] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:31.775 [2024-11-18 23:08:50.990109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.775 23:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 [ 00:13:31.775 { 00:13:31.775 "name": "BaseBdev3", 00:13:31.775 "aliases": [ 00:13:31.775 "abe7b09b-e504-4248-a788-24445b82d661" 00:13:31.775 ], 00:13:31.775 "product_name": "Malloc disk", 00:13:31.775 "block_size": 512, 00:13:31.775 "num_blocks": 65536, 00:13:31.775 "uuid": "abe7b09b-e504-4248-a788-24445b82d661", 00:13:31.775 "assigned_rate_limits": { 00:13:31.775 "rw_ios_per_sec": 0, 00:13:31.775 "rw_mbytes_per_sec": 0, 00:13:31.775 "r_mbytes_per_sec": 0, 00:13:31.775 "w_mbytes_per_sec": 0 00:13:31.775 }, 00:13:31.775 "claimed": true, 00:13:31.775 "claim_type": "exclusive_write", 00:13:31.775 "zoned": false, 00:13:31.775 "supported_io_types": { 00:13:31.775 "read": true, 00:13:31.775 "write": true, 00:13:31.775 "unmap": true, 00:13:31.775 "flush": true, 00:13:31.775 "reset": true, 00:13:31.775 "nvme_admin": false, 00:13:31.775 "nvme_io": false, 00:13:31.775 "nvme_io_md": false, 00:13:31.775 "write_zeroes": true, 00:13:31.775 "zcopy": true, 00:13:31.775 "get_zone_info": false, 00:13:31.775 "zone_management": false, 00:13:31.775 "zone_append": false, 00:13:31.775 "compare": false, 00:13:31.775 "compare_and_write": false, 00:13:31.775 "abort": true, 00:13:31.775 "seek_hole": false, 00:13:31.775 "seek_data": false, 00:13:31.775 "copy": true, 00:13:31.775 "nvme_iov_md": 
false 00:13:31.775 }, 00:13:31.775 "memory_domains": [ 00:13:31.775 { 00:13:31.775 "dma_device_id": "system", 00:13:31.775 "dma_device_type": 1 00:13:31.775 }, 00:13:31.775 { 00:13:31.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.775 "dma_device_type": 2 00:13:31.775 } 00:13:31.775 ], 00:13:31.775 "driver_specific": {} 00:13:31.775 } 00:13:31.775 ] 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.775 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.775 "name": "Existed_Raid", 00:13:31.775 "uuid": "08967771-3067-4f28-b4c1-a233bec8a01e", 00:13:31.775 "strip_size_kb": 64, 00:13:31.775 "state": "online", 00:13:31.775 "raid_level": "raid5f", 00:13:31.775 "superblock": true, 00:13:31.775 "num_base_bdevs": 3, 00:13:31.775 "num_base_bdevs_discovered": 3, 00:13:31.775 "num_base_bdevs_operational": 3, 00:13:31.775 "base_bdevs_list": [ 00:13:31.775 { 00:13:31.775 "name": "BaseBdev1", 00:13:31.775 "uuid": "da769bde-9737-4eb7-9d35-2e568ec92c06", 00:13:31.775 "is_configured": true, 00:13:31.775 "data_offset": 2048, 00:13:31.775 "data_size": 63488 00:13:31.775 }, 00:13:31.775 { 00:13:31.775 "name": "BaseBdev2", 00:13:31.775 "uuid": "e57d7993-b3f3-4e53-91e8-4f0b30efb976", 00:13:31.775 "is_configured": true, 00:13:31.776 "data_offset": 2048, 00:13:31.776 "data_size": 63488 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": "BaseBdev3", 00:13:31.776 "uuid": "abe7b09b-e504-4248-a788-24445b82d661", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 2048, 00:13:31.776 "data_size": 63488 00:13:31.776 } 00:13:31.776 ] 00:13:31.776 }' 00:13:31.776 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.776 23:08:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:32.345 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.346 [2024-11-18 23:08:51.508088] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:32.346 "name": "Existed_Raid", 00:13:32.346 "aliases": [ 00:13:32.346 "08967771-3067-4f28-b4c1-a233bec8a01e" 00:13:32.346 ], 00:13:32.346 "product_name": "Raid Volume", 00:13:32.346 "block_size": 512, 00:13:32.346 "num_blocks": 126976, 00:13:32.346 "uuid": "08967771-3067-4f28-b4c1-a233bec8a01e", 00:13:32.346 "assigned_rate_limits": { 00:13:32.346 "rw_ios_per_sec": 0, 00:13:32.346 "rw_mbytes_per_sec": 0, 00:13:32.346 "r_mbytes_per_sec": 
0, 00:13:32.346 "w_mbytes_per_sec": 0 00:13:32.346 }, 00:13:32.346 "claimed": false, 00:13:32.346 "zoned": false, 00:13:32.346 "supported_io_types": { 00:13:32.346 "read": true, 00:13:32.346 "write": true, 00:13:32.346 "unmap": false, 00:13:32.346 "flush": false, 00:13:32.346 "reset": true, 00:13:32.346 "nvme_admin": false, 00:13:32.346 "nvme_io": false, 00:13:32.346 "nvme_io_md": false, 00:13:32.346 "write_zeroes": true, 00:13:32.346 "zcopy": false, 00:13:32.346 "get_zone_info": false, 00:13:32.346 "zone_management": false, 00:13:32.346 "zone_append": false, 00:13:32.346 "compare": false, 00:13:32.346 "compare_and_write": false, 00:13:32.346 "abort": false, 00:13:32.346 "seek_hole": false, 00:13:32.346 "seek_data": false, 00:13:32.346 "copy": false, 00:13:32.346 "nvme_iov_md": false 00:13:32.346 }, 00:13:32.346 "driver_specific": { 00:13:32.346 "raid": { 00:13:32.346 "uuid": "08967771-3067-4f28-b4c1-a233bec8a01e", 00:13:32.346 "strip_size_kb": 64, 00:13:32.346 "state": "online", 00:13:32.346 "raid_level": "raid5f", 00:13:32.346 "superblock": true, 00:13:32.346 "num_base_bdevs": 3, 00:13:32.346 "num_base_bdevs_discovered": 3, 00:13:32.346 "num_base_bdevs_operational": 3, 00:13:32.346 "base_bdevs_list": [ 00:13:32.346 { 00:13:32.346 "name": "BaseBdev1", 00:13:32.346 "uuid": "da769bde-9737-4eb7-9d35-2e568ec92c06", 00:13:32.346 "is_configured": true, 00:13:32.346 "data_offset": 2048, 00:13:32.346 "data_size": 63488 00:13:32.346 }, 00:13:32.346 { 00:13:32.346 "name": "BaseBdev2", 00:13:32.346 "uuid": "e57d7993-b3f3-4e53-91e8-4f0b30efb976", 00:13:32.346 "is_configured": true, 00:13:32.346 "data_offset": 2048, 00:13:32.346 "data_size": 63488 00:13:32.346 }, 00:13:32.346 { 00:13:32.346 "name": "BaseBdev3", 00:13:32.346 "uuid": "abe7b09b-e504-4248-a788-24445b82d661", 00:13:32.346 "is_configured": true, 00:13:32.346 "data_offset": 2048, 00:13:32.346 "data_size": 63488 00:13:32.346 } 00:13:32.346 ] 00:13:32.346 } 00:13:32.346 } 00:13:32.346 }' 00:13:32.346 23:08:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:32.346 BaseBdev2 00:13:32.346 BaseBdev3' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.346 23:08:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.346 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.607 [2024-11-18 23:08:51.779525] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:32.607 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.608 "name": "Existed_Raid", 00:13:32.608 "uuid": "08967771-3067-4f28-b4c1-a233bec8a01e", 00:13:32.608 "strip_size_kb": 64, 00:13:32.608 "state": "online", 00:13:32.608 "raid_level": "raid5f", 00:13:32.608 "superblock": true, 00:13:32.608 "num_base_bdevs": 3, 00:13:32.608 "num_base_bdevs_discovered": 2, 00:13:32.608 "num_base_bdevs_operational": 2, 00:13:32.608 "base_bdevs_list": [ 00:13:32.608 { 00:13:32.608 "name": null, 00:13:32.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.608 "is_configured": false, 00:13:32.608 "data_offset": 0, 00:13:32.608 "data_size": 63488 00:13:32.608 }, 00:13:32.608 { 00:13:32.608 "name": "BaseBdev2", 00:13:32.608 "uuid": "e57d7993-b3f3-4e53-91e8-4f0b30efb976", 00:13:32.608 "is_configured": true, 00:13:32.608 "data_offset": 2048, 00:13:32.608 "data_size": 63488 00:13:32.608 }, 00:13:32.608 { 00:13:32.608 "name": "BaseBdev3", 00:13:32.608 "uuid": "abe7b09b-e504-4248-a788-24445b82d661", 00:13:32.608 "is_configured": true, 00:13:32.608 "data_offset": 2048, 00:13:32.608 "data_size": 63488 00:13:32.608 } 00:13:32.608 ] 00:13:32.608 }' 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.608 23:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.867 23:08:52 
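The `verify_raid_bdev_state` passes traced above all reduce to the same pattern: fetch the raid bdev list over RPC, filter it with jq, and compare fields against the expected values. A minimal standalone sketch of that check, using hand-written sample JSON in place of the live `rpc_cmd bdev_raid_get_bdevs all` output (the `Existed_Raid` name and field values mirror the log; the JSON itself is illustrative, not captured from a run):

```shell
#!/bin/sh
# Sample stand-in for the bdev_raid_get_bdevs RPC response (hand-written).
raid_bdevs='[{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}]'

# Select the raid bdev by name, as bdev_raid.sh@113 does.
raid_bdev_info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Extract and compare the fields the verifier checks.
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

[ "$state" = "online" ] || { echo "unexpected state: $state"; exit 1; }
[ "$level" = "raid5f" ] || { echo "unexpected level: $level"; exit 1; }
[ "$discovered" -eq 3 ] || { echo "unexpected discovered: $discovered"; exit 1; }
echo "state check passed"
```

In the real suite the JSON comes from the running SPDK target, so the same filter is re-run after each `bdev_malloc_create`/`waitforbdev` step to watch `num_base_bdevs_discovered` climb from 1 to 3 and `state` flip from `configuring` to `online`.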
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:32.867 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.867 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.867 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.867 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.867 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.127 [2024-11-18 23:08:52.274057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.127 [2024-11-18 23:08:52.274177] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.127 [2024-11-18 23:08:52.284909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.127 [2024-11-18 23:08:52.344830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:33.127 [2024-11-18 23:08:52.344871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:33.127 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.128 BaseBdev2 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.128 [ 00:13:33.128 { 00:13:33.128 "name": "BaseBdev2", 00:13:33.128 "aliases": [ 00:13:33.128 "034614b9-86e1-49fe-a782-ebfb99c1d926" 00:13:33.128 ], 00:13:33.128 "product_name": "Malloc disk", 00:13:33.128 "block_size": 512, 00:13:33.128 "num_blocks": 65536, 00:13:33.128 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:33.128 "assigned_rate_limits": { 00:13:33.128 "rw_ios_per_sec": 0, 00:13:33.128 "rw_mbytes_per_sec": 0, 00:13:33.128 "r_mbytes_per_sec": 0, 00:13:33.128 "w_mbytes_per_sec": 0 00:13:33.128 }, 00:13:33.128 "claimed": false, 00:13:33.128 "zoned": false, 00:13:33.128 "supported_io_types": { 00:13:33.128 "read": true, 00:13:33.128 "write": true, 00:13:33.128 "unmap": true, 00:13:33.128 "flush": true, 00:13:33.128 "reset": true, 00:13:33.128 "nvme_admin": false, 00:13:33.128 "nvme_io": false, 00:13:33.128 "nvme_io_md": false, 00:13:33.128 "write_zeroes": true, 00:13:33.128 "zcopy": true, 00:13:33.128 "get_zone_info": false, 00:13:33.128 "zone_management": false, 00:13:33.128 "zone_append": false, 
00:13:33.128 "compare": false, 00:13:33.128 "compare_and_write": false, 00:13:33.128 "abort": true, 00:13:33.128 "seek_hole": false, 00:13:33.128 "seek_data": false, 00:13:33.128 "copy": true, 00:13:33.128 "nvme_iov_md": false 00:13:33.128 }, 00:13:33.128 "memory_domains": [ 00:13:33.128 { 00:13:33.128 "dma_device_id": "system", 00:13:33.128 "dma_device_type": 1 00:13:33.128 }, 00:13:33.128 { 00:13:33.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.128 "dma_device_type": 2 00:13:33.128 } 00:13:33.128 ], 00:13:33.128 "driver_specific": {} 00:13:33.128 } 00:13:33.128 ] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.128 BaseBdev3 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:33.128 
23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.128 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.128 [ 00:13:33.128 { 00:13:33.128 "name": "BaseBdev3", 00:13:33.128 "aliases": [ 00:13:33.128 "bf84d5cb-df7d-429d-882f-e04bbcdd66fd" 00:13:33.128 ], 00:13:33.128 "product_name": "Malloc disk", 00:13:33.128 "block_size": 512, 00:13:33.128 "num_blocks": 65536, 00:13:33.128 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:33.128 "assigned_rate_limits": { 00:13:33.128 "rw_ios_per_sec": 0, 00:13:33.128 "rw_mbytes_per_sec": 0, 00:13:33.128 "r_mbytes_per_sec": 0, 00:13:33.128 "w_mbytes_per_sec": 0 00:13:33.128 }, 00:13:33.128 "claimed": false, 00:13:33.388 "zoned": false, 00:13:33.388 "supported_io_types": { 00:13:33.388 "read": true, 00:13:33.388 "write": true, 00:13:33.388 "unmap": true, 00:13:33.388 "flush": true, 00:13:33.388 "reset": true, 00:13:33.388 "nvme_admin": false, 00:13:33.388 "nvme_io": false, 00:13:33.388 "nvme_io_md": false, 00:13:33.388 "write_zeroes": true, 00:13:33.388 "zcopy": true, 00:13:33.388 "get_zone_info": 
false, 00:13:33.388 "zone_management": false, 00:13:33.388 "zone_append": false, 00:13:33.388 "compare": false, 00:13:33.388 "compare_and_write": false, 00:13:33.388 "abort": true, 00:13:33.388 "seek_hole": false, 00:13:33.388 "seek_data": false, 00:13:33.388 "copy": true, 00:13:33.388 "nvme_iov_md": false 00:13:33.388 }, 00:13:33.388 "memory_domains": [ 00:13:33.388 { 00:13:33.388 "dma_device_id": "system", 00:13:33.388 "dma_device_type": 1 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.388 "dma_device_type": 2 00:13:33.388 } 00:13:33.388 ], 00:13:33.388 "driver_specific": {} 00:13:33.388 } 00:13:33.388 ] 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.388 [2024-11-18 23:08:52.519492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.388 [2024-11-18 23:08:52.519609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.388 [2024-11-18 23:08:52.519652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.388 [2024-11-18 23:08:52.521496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.388 23:08:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.388 "name": "Existed_Raid", 00:13:33.388 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:33.388 "strip_size_kb": 64, 00:13:33.388 "state": "configuring", 00:13:33.388 "raid_level": "raid5f", 00:13:33.388 "superblock": true, 00:13:33.388 "num_base_bdevs": 3, 00:13:33.388 "num_base_bdevs_discovered": 2, 00:13:33.388 "num_base_bdevs_operational": 3, 00:13:33.388 "base_bdevs_list": [ 00:13:33.388 { 00:13:33.388 "name": "BaseBdev1", 00:13:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.388 "is_configured": false, 00:13:33.388 "data_offset": 0, 00:13:33.388 "data_size": 0 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "name": "BaseBdev2", 00:13:33.388 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:33.388 "is_configured": true, 00:13:33.388 "data_offset": 2048, 00:13:33.388 "data_size": 63488 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "name": "BaseBdev3", 00:13:33.388 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:33.388 "is_configured": true, 00:13:33.388 "data_offset": 2048, 00:13:33.388 "data_size": 63488 00:13:33.388 } 00:13:33.388 ] 00:13:33.388 }' 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.388 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 [2024-11-18 23:08:52.923431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.648 
23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.648 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.649 "name": "Existed_Raid", 00:13:33.649 "uuid": 
"53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:33.649 "strip_size_kb": 64, 00:13:33.649 "state": "configuring", 00:13:33.649 "raid_level": "raid5f", 00:13:33.649 "superblock": true, 00:13:33.649 "num_base_bdevs": 3, 00:13:33.649 "num_base_bdevs_discovered": 1, 00:13:33.649 "num_base_bdevs_operational": 3, 00:13:33.649 "base_bdevs_list": [ 00:13:33.649 { 00:13:33.649 "name": "BaseBdev1", 00:13:33.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.649 "is_configured": false, 00:13:33.649 "data_offset": 0, 00:13:33.649 "data_size": 0 00:13:33.649 }, 00:13:33.649 { 00:13:33.649 "name": null, 00:13:33.649 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:33.649 "is_configured": false, 00:13:33.649 "data_offset": 0, 00:13:33.649 "data_size": 63488 00:13:33.649 }, 00:13:33.649 { 00:13:33.649 "name": "BaseBdev3", 00:13:33.649 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:33.649 "is_configured": true, 00:13:33.649 "data_offset": 2048, 00:13:33.649 "data_size": 63488 00:13:33.649 } 00:13:33.649 ] 00:13:33.649 }' 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.649 23:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:34.217 23:08:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 [2024-11-18 23:08:53.446225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.217 BaseBdev1 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 [ 00:13:34.217 { 00:13:34.217 "name": "BaseBdev1", 00:13:34.217 "aliases": [ 00:13:34.217 "d10c55bd-36d6-4aae-8662-0538f672eceb" 00:13:34.217 ], 00:13:34.217 "product_name": "Malloc disk", 00:13:34.217 "block_size": 512, 00:13:34.217 "num_blocks": 65536, 00:13:34.217 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:34.217 "assigned_rate_limits": { 00:13:34.217 "rw_ios_per_sec": 0, 00:13:34.217 "rw_mbytes_per_sec": 0, 00:13:34.217 "r_mbytes_per_sec": 0, 00:13:34.217 "w_mbytes_per_sec": 0 00:13:34.217 }, 00:13:34.217 "claimed": true, 00:13:34.217 "claim_type": "exclusive_write", 00:13:34.217 "zoned": false, 00:13:34.217 "supported_io_types": { 00:13:34.217 "read": true, 00:13:34.217 "write": true, 00:13:34.217 "unmap": true, 00:13:34.217 "flush": true, 00:13:34.217 "reset": true, 00:13:34.217 "nvme_admin": false, 00:13:34.217 "nvme_io": false, 00:13:34.217 "nvme_io_md": false, 00:13:34.217 "write_zeroes": true, 00:13:34.217 "zcopy": true, 00:13:34.217 "get_zone_info": false, 00:13:34.217 "zone_management": false, 00:13:34.217 "zone_append": false, 00:13:34.217 "compare": false, 00:13:34.217 "compare_and_write": false, 00:13:34.217 "abort": true, 00:13:34.217 "seek_hole": false, 00:13:34.217 "seek_data": false, 00:13:34.217 "copy": true, 00:13:34.217 "nvme_iov_md": false 00:13:34.217 }, 00:13:34.217 "memory_domains": [ 00:13:34.217 { 00:13:34.217 "dma_device_id": "system", 00:13:34.217 "dma_device_type": 1 00:13:34.217 }, 00:13:34.217 { 00:13:34.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.217 "dma_device_type": 2 00:13:34.217 } 00:13:34.217 ], 00:13:34.217 "driver_specific": {} 00:13:34.217 } 00:13:34.217 ] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.217 "name": "Existed_Raid", 00:13:34.217 "uuid": 
"53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:34.217 "strip_size_kb": 64, 00:13:34.217 "state": "configuring", 00:13:34.217 "raid_level": "raid5f", 00:13:34.217 "superblock": true, 00:13:34.217 "num_base_bdevs": 3, 00:13:34.217 "num_base_bdevs_discovered": 2, 00:13:34.217 "num_base_bdevs_operational": 3, 00:13:34.217 "base_bdevs_list": [ 00:13:34.217 { 00:13:34.217 "name": "BaseBdev1", 00:13:34.217 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:34.217 "is_configured": true, 00:13:34.217 "data_offset": 2048, 00:13:34.217 "data_size": 63488 00:13:34.217 }, 00:13:34.217 { 00:13:34.217 "name": null, 00:13:34.217 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:34.217 "is_configured": false, 00:13:34.217 "data_offset": 0, 00:13:34.217 "data_size": 63488 00:13:34.217 }, 00:13:34.217 { 00:13:34.217 "name": "BaseBdev3", 00:13:34.217 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:34.217 "is_configured": true, 00:13:34.217 "data_offset": 2048, 00:13:34.217 "data_size": 63488 00:13:34.217 } 00:13:34.217 ] 00:13:34.217 }' 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.217 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:34.785 23:08:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.785 [2024-11-18 23:08:53.933406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.785 "name": "Existed_Raid", 00:13:34.785 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:34.785 "strip_size_kb": 64, 00:13:34.785 "state": "configuring", 00:13:34.785 "raid_level": "raid5f", 00:13:34.785 "superblock": true, 00:13:34.785 "num_base_bdevs": 3, 00:13:34.785 "num_base_bdevs_discovered": 1, 00:13:34.785 "num_base_bdevs_operational": 3, 00:13:34.785 "base_bdevs_list": [ 00:13:34.785 { 00:13:34.785 "name": "BaseBdev1", 00:13:34.785 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:34.785 "is_configured": true, 00:13:34.785 "data_offset": 2048, 00:13:34.785 "data_size": 63488 00:13:34.785 }, 00:13:34.785 { 00:13:34.785 "name": null, 00:13:34.785 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:34.785 "is_configured": false, 00:13:34.785 "data_offset": 0, 00:13:34.785 "data_size": 63488 00:13:34.785 }, 00:13:34.785 { 00:13:34.785 "name": null, 00:13:34.785 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:34.785 "is_configured": false, 00:13:34.785 "data_offset": 0, 00:13:34.785 "data_size": 63488 00:13:34.785 } 00:13:34.785 ] 00:13:34.785 }' 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.785 23:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.045 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.045 23:08:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.045 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.045 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.304 [2024-11-18 23:08:54.432679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:35.304 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.305 "name": "Existed_Raid", 00:13:35.305 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:35.305 "strip_size_kb": 64, 00:13:35.305 "state": "configuring", 00:13:35.305 "raid_level": "raid5f", 00:13:35.305 "superblock": true, 00:13:35.305 "num_base_bdevs": 3, 00:13:35.305 "num_base_bdevs_discovered": 2, 00:13:35.305 "num_base_bdevs_operational": 3, 00:13:35.305 "base_bdevs_list": [ 00:13:35.305 { 00:13:35.305 "name": "BaseBdev1", 00:13:35.305 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:35.305 "is_configured": true, 00:13:35.305 "data_offset": 2048, 00:13:35.305 "data_size": 63488 00:13:35.305 }, 00:13:35.305 { 00:13:35.305 "name": null, 00:13:35.305 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:35.305 "is_configured": false, 00:13:35.305 "data_offset": 0, 00:13:35.305 "data_size": 63488 00:13:35.305 }, 00:13:35.305 { 00:13:35.305 "name": "BaseBdev3", 00:13:35.305 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 
00:13:35.305 "is_configured": true, 00:13:35.305 "data_offset": 2048, 00:13:35.305 "data_size": 63488 00:13:35.305 } 00:13:35.305 ] 00:13:35.305 }' 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.305 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.565 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.565 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.565 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.565 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.565 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.825 [2024-11-18 23:08:54.963772] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.825 23:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.825 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.825 "name": "Existed_Raid", 00:13:35.825 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:35.825 "strip_size_kb": 64, 00:13:35.825 "state": "configuring", 00:13:35.825 "raid_level": "raid5f", 00:13:35.825 "superblock": true, 00:13:35.825 "num_base_bdevs": 3, 00:13:35.825 "num_base_bdevs_discovered": 1, 00:13:35.825 "num_base_bdevs_operational": 3, 00:13:35.825 "base_bdevs_list": [ 00:13:35.825 { 00:13:35.825 
"name": null, 00:13:35.825 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:35.825 "is_configured": false, 00:13:35.825 "data_offset": 0, 00:13:35.825 "data_size": 63488 00:13:35.825 }, 00:13:35.825 { 00:13:35.825 "name": null, 00:13:35.825 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:35.825 "is_configured": false, 00:13:35.825 "data_offset": 0, 00:13:35.825 "data_size": 63488 00:13:35.825 }, 00:13:35.825 { 00:13:35.825 "name": "BaseBdev3", 00:13:35.825 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:35.825 "is_configured": true, 00:13:35.825 "data_offset": 2048, 00:13:35.825 "data_size": 63488 00:13:35.825 } 00:13:35.825 ] 00:13:35.825 }' 00:13:35.825 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.825 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.093 [2024-11-18 
23:08:55.449512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.093 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.094 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.360 23:08:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.360 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.360 "name": "Existed_Raid", 00:13:36.360 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:36.360 "strip_size_kb": 64, 00:13:36.360 "state": "configuring", 00:13:36.360 "raid_level": "raid5f", 00:13:36.360 "superblock": true, 00:13:36.360 "num_base_bdevs": 3, 00:13:36.360 "num_base_bdevs_discovered": 2, 00:13:36.360 "num_base_bdevs_operational": 3, 00:13:36.360 "base_bdevs_list": [ 00:13:36.360 { 00:13:36.360 "name": null, 00:13:36.360 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:36.360 "is_configured": false, 00:13:36.360 "data_offset": 0, 00:13:36.360 "data_size": 63488 00:13:36.360 }, 00:13:36.360 { 00:13:36.360 "name": "BaseBdev2", 00:13:36.360 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:36.360 "is_configured": true, 00:13:36.360 "data_offset": 2048, 00:13:36.360 "data_size": 63488 00:13:36.360 }, 00:13:36.360 { 00:13:36.360 "name": "BaseBdev3", 00:13:36.360 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:36.360 "is_configured": true, 00:13:36.360 "data_offset": 2048, 00:13:36.360 "data_size": 63488 00:13:36.360 } 00:13:36.360 ] 00:13:36.360 }' 00:13:36.360 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.360 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.620 23:08:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:36.620 23:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d10c55bd-36d6-4aae-8662-0538f672eceb 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.879 [2024-11-18 23:08:56.027039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:36.879 [2024-11-18 23:08:56.027193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:36.879 [2024-11-18 23:08:56.027208] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:36.879 [2024-11-18 23:08:56.027502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:36.879 NewBaseBdev 00:13:36.879 [2024-11-18 23:08:56.027929] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:36.879 [2024-11-18 23:08:56.027943] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000006d00 00:13:36.879 [2024-11-18 23:08:56.028040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.879 [ 00:13:36.879 { 00:13:36.879 "name": "NewBaseBdev", 00:13:36.879 "aliases": [ 00:13:36.879 "d10c55bd-36d6-4aae-8662-0538f672eceb" 00:13:36.879 ], 00:13:36.879 "product_name": "Malloc disk", 00:13:36.879 
"block_size": 512, 00:13:36.879 "num_blocks": 65536, 00:13:36.879 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:36.879 "assigned_rate_limits": { 00:13:36.879 "rw_ios_per_sec": 0, 00:13:36.879 "rw_mbytes_per_sec": 0, 00:13:36.879 "r_mbytes_per_sec": 0, 00:13:36.879 "w_mbytes_per_sec": 0 00:13:36.879 }, 00:13:36.879 "claimed": true, 00:13:36.879 "claim_type": "exclusive_write", 00:13:36.879 "zoned": false, 00:13:36.879 "supported_io_types": { 00:13:36.879 "read": true, 00:13:36.879 "write": true, 00:13:36.879 "unmap": true, 00:13:36.879 "flush": true, 00:13:36.879 "reset": true, 00:13:36.879 "nvme_admin": false, 00:13:36.879 "nvme_io": false, 00:13:36.879 "nvme_io_md": false, 00:13:36.879 "write_zeroes": true, 00:13:36.879 "zcopy": true, 00:13:36.879 "get_zone_info": false, 00:13:36.879 "zone_management": false, 00:13:36.879 "zone_append": false, 00:13:36.879 "compare": false, 00:13:36.879 "compare_and_write": false, 00:13:36.879 "abort": true, 00:13:36.879 "seek_hole": false, 00:13:36.879 "seek_data": false, 00:13:36.879 "copy": true, 00:13:36.879 "nvme_iov_md": false 00:13:36.879 }, 00:13:36.879 "memory_domains": [ 00:13:36.879 { 00:13:36.879 "dma_device_id": "system", 00:13:36.879 "dma_device_type": 1 00:13:36.879 }, 00:13:36.879 { 00:13:36.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.879 "dma_device_type": 2 00:13:36.879 } 00:13:36.879 ], 00:13:36.879 "driver_specific": {} 00:13:36.879 } 00:13:36.879 ] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.879 23:08:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.879 "name": "Existed_Raid", 00:13:36.879 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:36.879 "strip_size_kb": 64, 00:13:36.879 "state": "online", 00:13:36.879 "raid_level": "raid5f", 00:13:36.879 "superblock": true, 00:13:36.879 "num_base_bdevs": 3, 00:13:36.879 "num_base_bdevs_discovered": 3, 00:13:36.879 "num_base_bdevs_operational": 3, 00:13:36.879 
"base_bdevs_list": [ 00:13:36.879 { 00:13:36.879 "name": "NewBaseBdev", 00:13:36.879 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:36.879 "is_configured": true, 00:13:36.879 "data_offset": 2048, 00:13:36.879 "data_size": 63488 00:13:36.879 }, 00:13:36.879 { 00:13:36.879 "name": "BaseBdev2", 00:13:36.879 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:36.879 "is_configured": true, 00:13:36.879 "data_offset": 2048, 00:13:36.879 "data_size": 63488 00:13:36.879 }, 00:13:36.879 { 00:13:36.879 "name": "BaseBdev3", 00:13:36.879 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:36.879 "is_configured": true, 00:13:36.879 "data_offset": 2048, 00:13:36.879 "data_size": 63488 00:13:36.879 } 00:13:36.879 ] 00:13:36.879 }' 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.879 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:37.139 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.400 [2024-11-18 23:08:56.518391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.400 "name": "Existed_Raid", 00:13:37.400 "aliases": [ 00:13:37.400 "53c0ece0-8b70-4c03-9eab-20b3fc5b350a" 00:13:37.400 ], 00:13:37.400 "product_name": "Raid Volume", 00:13:37.400 "block_size": 512, 00:13:37.400 "num_blocks": 126976, 00:13:37.400 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:37.400 "assigned_rate_limits": { 00:13:37.400 "rw_ios_per_sec": 0, 00:13:37.400 "rw_mbytes_per_sec": 0, 00:13:37.400 "r_mbytes_per_sec": 0, 00:13:37.400 "w_mbytes_per_sec": 0 00:13:37.400 }, 00:13:37.400 "claimed": false, 00:13:37.400 "zoned": false, 00:13:37.400 "supported_io_types": { 00:13:37.400 "read": true, 00:13:37.400 "write": true, 00:13:37.400 "unmap": false, 00:13:37.400 "flush": false, 00:13:37.400 "reset": true, 00:13:37.400 "nvme_admin": false, 00:13:37.400 "nvme_io": false, 00:13:37.400 "nvme_io_md": false, 00:13:37.400 "write_zeroes": true, 00:13:37.400 "zcopy": false, 00:13:37.400 "get_zone_info": false, 00:13:37.400 "zone_management": false, 00:13:37.400 "zone_append": false, 00:13:37.400 "compare": false, 00:13:37.400 "compare_and_write": false, 00:13:37.400 "abort": false, 00:13:37.400 "seek_hole": false, 00:13:37.400 "seek_data": false, 00:13:37.400 "copy": false, 00:13:37.400 "nvme_iov_md": false 00:13:37.400 }, 00:13:37.400 "driver_specific": { 00:13:37.400 "raid": { 00:13:37.400 "uuid": "53c0ece0-8b70-4c03-9eab-20b3fc5b350a", 00:13:37.400 "strip_size_kb": 64, 00:13:37.400 "state": "online", 00:13:37.400 "raid_level": "raid5f", 00:13:37.400 "superblock": true, 00:13:37.400 
"num_base_bdevs": 3, 00:13:37.400 "num_base_bdevs_discovered": 3, 00:13:37.400 "num_base_bdevs_operational": 3, 00:13:37.400 "base_bdevs_list": [ 00:13:37.400 { 00:13:37.400 "name": "NewBaseBdev", 00:13:37.400 "uuid": "d10c55bd-36d6-4aae-8662-0538f672eceb", 00:13:37.400 "is_configured": true, 00:13:37.400 "data_offset": 2048, 00:13:37.400 "data_size": 63488 00:13:37.400 }, 00:13:37.400 { 00:13:37.400 "name": "BaseBdev2", 00:13:37.400 "uuid": "034614b9-86e1-49fe-a782-ebfb99c1d926", 00:13:37.400 "is_configured": true, 00:13:37.400 "data_offset": 2048, 00:13:37.400 "data_size": 63488 00:13:37.400 }, 00:13:37.400 { 00:13:37.400 "name": "BaseBdev3", 00:13:37.400 "uuid": "bf84d5cb-df7d-429d-882f-e04bbcdd66fd", 00:13:37.400 "is_configured": true, 00:13:37.400 "data_offset": 2048, 00:13:37.400 "data_size": 63488 00:13:37.400 } 00:13:37.400 ] 00:13:37.400 } 00:13:37.400 } 00:13:37.400 }' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:37.400 BaseBdev2 00:13:37.400 BaseBdev3' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:37.400 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.661 [2024-11-18 23:08:56.809706] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.661 [2024-11-18 23:08:56.809774] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.661 [2024-11-18 23:08:56.809854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.661 [2024-11-18 23:08:56.810113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.661 [2024-11-18 23:08:56.810138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91013 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91013 ']' 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91013 00:13:37.661 23:08:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91013 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91013' 00:13:37.661 killing process with pid 91013 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91013 00:13:37.661 [2024-11-18 23:08:56.860851] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.661 23:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91013 00:13:37.661 [2024-11-18 23:08:56.891867] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.922 23:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:37.922 00:13:37.922 real 0m9.071s 00:13:37.922 user 0m15.426s 00:13:37.922 sys 0m2.021s 00:13:37.922 23:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.922 ************************************ 00:13:37.922 END TEST raid5f_state_function_test_sb 00:13:37.922 ************************************ 00:13:37.922 23:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.922 23:08:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:37.922 23:08:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:37.922 
23:08:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.922 23:08:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.922 ************************************ 00:13:37.922 START TEST raid5f_superblock_test 00:13:37.922 ************************************ 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:37.922 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91618 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91618 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91618 ']' 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.923 23:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.183 [2024-11-18 23:08:57.326173] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:38.183 [2024-11-18 23:08:57.326344] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91618 ] 00:13:38.183 [2024-11-18 23:08:57.491792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.183 [2024-11-18 23:08:57.538625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.443 [2024-11-18 23:08:57.581934] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.443 [2024-11-18 23:08:57.581973] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 malloc1 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 [2024-11-18 23:08:58.180414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:39.013 [2024-11-18 23:08:58.180577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.013 [2024-11-18 23:08:58.180617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:39.013 [2024-11-18 23:08:58.180655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.013 [2024-11-18 23:08:58.182779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.013 [2024-11-18 23:08:58.182853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:39.013 pt1 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 malloc2 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 [2024-11-18 23:08:58.225955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.014 [2024-11-18 23:08:58.226169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.014 [2024-11-18 23:08:58.226250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:39.014 [2024-11-18 23:08:58.226408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.014 [2024-11-18 23:08:58.231436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.014 [2024-11-18 23:08:58.231593] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.014 pt2 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.014 malloc3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.014 [2024-11-18 23:08:58.261342] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.014 [2024-11-18 23:08:58.261447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.014 [2024-11-18 23:08:58.261479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:39.014 [2024-11-18 23:08:58.261507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.014 [2024-11-18 23:08:58.263542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.014 [2024-11-18 23:08:58.263629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.014 pt3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.014 [2024-11-18 23:08:58.273375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:39.014 [2024-11-18 23:08:58.275215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.014 [2024-11-18 23:08:58.275328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.014 [2024-11-18 23:08:58.275494] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:39.014 [2024-11-18 23:08:58.275506] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:39.014 [2024-11-18 23:08:58.275732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:39.014 [2024-11-18 23:08:58.276119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:39.014 [2024-11-18 23:08:58.276134] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:39.014 [2024-11-18 23:08:58.276249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.014 "name": "raid_bdev1", 00:13:39.014 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:39.014 "strip_size_kb": 64, 00:13:39.014 "state": "online", 00:13:39.014 "raid_level": "raid5f", 00:13:39.014 "superblock": true, 00:13:39.014 "num_base_bdevs": 3, 00:13:39.014 "num_base_bdevs_discovered": 3, 00:13:39.014 "num_base_bdevs_operational": 3, 00:13:39.014 "base_bdevs_list": [ 00:13:39.014 { 00:13:39.014 "name": "pt1", 00:13:39.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.014 "is_configured": true, 00:13:39.014 "data_offset": 2048, 00:13:39.014 "data_size": 63488 00:13:39.014 }, 00:13:39.014 { 00:13:39.014 "name": "pt2", 00:13:39.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.014 "is_configured": true, 00:13:39.014 "data_offset": 2048, 00:13:39.014 "data_size": 63488 00:13:39.014 }, 00:13:39.014 { 00:13:39.014 "name": "pt3", 00:13:39.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.014 "is_configured": true, 00:13:39.014 "data_offset": 2048, 00:13:39.014 "data_size": 63488 00:13:39.014 } 00:13:39.014 ] 00:13:39.014 }' 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.014 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:39.584 23:08:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.584 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.585 [2024-11-18 23:08:58.728990] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.585 "name": "raid_bdev1", 00:13:39.585 "aliases": [ 00:13:39.585 "2721b20e-beea-430d-9025-8b132a23b974" 00:13:39.585 ], 00:13:39.585 "product_name": "Raid Volume", 00:13:39.585 "block_size": 512, 00:13:39.585 "num_blocks": 126976, 00:13:39.585 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:39.585 "assigned_rate_limits": { 00:13:39.585 "rw_ios_per_sec": 0, 00:13:39.585 "rw_mbytes_per_sec": 0, 00:13:39.585 "r_mbytes_per_sec": 0, 00:13:39.585 "w_mbytes_per_sec": 0 00:13:39.585 }, 00:13:39.585 "claimed": false, 00:13:39.585 "zoned": false, 00:13:39.585 "supported_io_types": { 00:13:39.585 "read": true, 00:13:39.585 "write": true, 00:13:39.585 "unmap": false, 00:13:39.585 "flush": false, 00:13:39.585 "reset": true, 00:13:39.585 "nvme_admin": false, 00:13:39.585 "nvme_io": false, 00:13:39.585 "nvme_io_md": false, 
00:13:39.585 "write_zeroes": true, 00:13:39.585 "zcopy": false, 00:13:39.585 "get_zone_info": false, 00:13:39.585 "zone_management": false, 00:13:39.585 "zone_append": false, 00:13:39.585 "compare": false, 00:13:39.585 "compare_and_write": false, 00:13:39.585 "abort": false, 00:13:39.585 "seek_hole": false, 00:13:39.585 "seek_data": false, 00:13:39.585 "copy": false, 00:13:39.585 "nvme_iov_md": false 00:13:39.585 }, 00:13:39.585 "driver_specific": { 00:13:39.585 "raid": { 00:13:39.585 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:39.585 "strip_size_kb": 64, 00:13:39.585 "state": "online", 00:13:39.585 "raid_level": "raid5f", 00:13:39.585 "superblock": true, 00:13:39.585 "num_base_bdevs": 3, 00:13:39.585 "num_base_bdevs_discovered": 3, 00:13:39.585 "num_base_bdevs_operational": 3, 00:13:39.585 "base_bdevs_list": [ 00:13:39.585 { 00:13:39.585 "name": "pt1", 00:13:39.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.585 "is_configured": true, 00:13:39.585 "data_offset": 2048, 00:13:39.585 "data_size": 63488 00:13:39.585 }, 00:13:39.585 { 00:13:39.585 "name": "pt2", 00:13:39.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.585 "is_configured": true, 00:13:39.585 "data_offset": 2048, 00:13:39.585 "data_size": 63488 00:13:39.585 }, 00:13:39.585 { 00:13:39.585 "name": "pt3", 00:13:39.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.585 "is_configured": true, 00:13:39.585 "data_offset": 2048, 00:13:39.585 "data_size": 63488 00:13:39.585 } 00:13:39.585 ] 00:13:39.585 } 00:13:39.585 } 00:13:39.585 }' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:39.585 pt2 00:13:39.585 pt3' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.585 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.845 
23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.845 23:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.845 [2024-11-18 23:08:59.032450] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2721b20e-beea-430d-9025-8b132a23b974 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2721b20e-beea-430d-9025-8b132a23b974 ']' 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.845 23:08:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.845 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.845 [2024-11-18 23:08:59.076206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.845 [2024-11-18 23:08:59.076276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.845 [2024-11-18 23:08:59.076383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.846 [2024-11-18 23:08:59.076446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.846 [2024-11-18 23:08:59.076460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.846 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.846 [2024-11-18 23:08:59.219986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:40.105 [2024-11-18 23:08:59.221891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:40.105 [2024-11-18 23:08:59.221933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:40.105 [2024-11-18 23:08:59.221977] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:40.105 [2024-11-18 23:08:59.222014] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:40.105 [2024-11-18 23:08:59.222032] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:40.105 [2024-11-18 23:08:59.222044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.105 [2024-11-18 23:08:59.222056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:40.105 request: 00:13:40.105 { 00:13:40.105 "name": "raid_bdev1", 00:13:40.105 "raid_level": "raid5f", 00:13:40.105 "base_bdevs": [ 00:13:40.105 "malloc1", 00:13:40.105 "malloc2", 00:13:40.105 "malloc3" 00:13:40.105 ], 00:13:40.105 "strip_size_kb": 64, 00:13:40.105 "superblock": false, 00:13:40.105 "method": "bdev_raid_create", 00:13:40.105 "req_id": 1 00:13:40.105 } 00:13:40.105 Got JSON-RPC error response 00:13:40.105 response: 00:13:40.105 { 00:13:40.105 "code": -17, 00:13:40.105 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:40.105 } 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.105 
23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.105 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.106 [2024-11-18 23:08:59.283843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.106 [2024-11-18 23:08:59.283935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.106 [2024-11-18 23:08:59.283965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:40.106 [2024-11-18 23:08:59.284013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.106 [2024-11-18 23:08:59.286034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.106 [2024-11-18 23:08:59.286121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.106 [2024-11-18 23:08:59.286195] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:40.106 [2024-11-18 23:08:59.286258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.106 pt1 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.106 "name": "raid_bdev1", 00:13:40.106 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:40.106 "strip_size_kb": 64, 00:13:40.106 "state": "configuring", 00:13:40.106 "raid_level": "raid5f", 00:13:40.106 "superblock": true, 00:13:40.106 "num_base_bdevs": 3, 00:13:40.106 "num_base_bdevs_discovered": 1, 00:13:40.106 
"num_base_bdevs_operational": 3, 00:13:40.106 "base_bdevs_list": [ 00:13:40.106 { 00:13:40.106 "name": "pt1", 00:13:40.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.106 "is_configured": true, 00:13:40.106 "data_offset": 2048, 00:13:40.106 "data_size": 63488 00:13:40.106 }, 00:13:40.106 { 00:13:40.106 "name": null, 00:13:40.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.106 "is_configured": false, 00:13:40.106 "data_offset": 2048, 00:13:40.106 "data_size": 63488 00:13:40.106 }, 00:13:40.106 { 00:13:40.106 "name": null, 00:13:40.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.106 "is_configured": false, 00:13:40.106 "data_offset": 2048, 00:13:40.106 "data_size": 63488 00:13:40.106 } 00:13:40.106 ] 00:13:40.106 }' 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.106 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.365 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:40.365 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.365 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.366 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.366 [2024-11-18 23:08:59.723312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.366 [2024-11-18 23:08:59.723427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.366 [2024-11-18 23:08:59.723460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:40.366 [2024-11-18 23:08:59.723491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.366 [2024-11-18 23:08:59.723821] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.366 [2024-11-18 23:08:59.723880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.366 [2024-11-18 23:08:59.723960] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.366 [2024-11-18 23:08:59.724010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.366 pt2 00:13:40.366 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.366 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:40.366 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.366 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.366 [2024-11-18 23:08:59.735304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.625 23:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.625 "name": "raid_bdev1", 00:13:40.626 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:40.626 "strip_size_kb": 64, 00:13:40.626 "state": "configuring", 00:13:40.626 "raid_level": "raid5f", 00:13:40.626 "superblock": true, 00:13:40.626 "num_base_bdevs": 3, 00:13:40.626 "num_base_bdevs_discovered": 1, 00:13:40.626 "num_base_bdevs_operational": 3, 00:13:40.626 "base_bdevs_list": [ 00:13:40.626 { 00:13:40.626 "name": "pt1", 00:13:40.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.626 "is_configured": true, 00:13:40.626 "data_offset": 2048, 00:13:40.626 "data_size": 63488 00:13:40.626 }, 00:13:40.626 { 00:13:40.626 "name": null, 00:13:40.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.626 "is_configured": false, 00:13:40.626 "data_offset": 0, 00:13:40.626 "data_size": 63488 00:13:40.626 }, 00:13:40.626 { 00:13:40.626 "name": null, 00:13:40.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.626 "is_configured": false, 00:13:40.626 "data_offset": 2048, 00:13:40.626 "data_size": 63488 00:13:40.626 } 00:13:40.626 ] 00:13:40.626 }' 00:13:40.626 23:08:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.626 23:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.886 [2024-11-18 23:09:00.230399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.886 [2024-11-18 23:09:00.230509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.886 [2024-11-18 23:09:00.230541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:40.886 [2024-11-18 23:09:00.230568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.886 [2024-11-18 23:09:00.230918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.886 [2024-11-18 23:09:00.230974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.886 [2024-11-18 23:09:00.231060] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.886 [2024-11-18 23:09:00.231104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.886 pt2 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:40.886 23:09:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.886 [2024-11-18 23:09:00.242382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:40.886 [2024-11-18 23:09:00.242460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.886 [2024-11-18 23:09:00.242490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:40.886 [2024-11-18 23:09:00.242515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.886 [2024-11-18 23:09:00.242831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.886 [2024-11-18 23:09:00.242888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:40.886 [2024-11-18 23:09:00.242964] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:40.886 [2024-11-18 23:09:00.243007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.886 [2024-11-18 23:09:00.243119] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:40.886 [2024-11-18 23:09:00.243155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:40.886 [2024-11-18 23:09:00.243424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:40.886 [2024-11-18 23:09:00.243842] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:40.886 [2024-11-18 23:09:00.243894] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:40.886 [2024-11-18 23:09:00.244020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.886 pt3 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:40.886 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.887 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.147 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.147 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.147 "name": "raid_bdev1", 00:13:41.147 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:41.147 "strip_size_kb": 64, 00:13:41.147 "state": "online", 00:13:41.147 "raid_level": "raid5f", 00:13:41.147 "superblock": true, 00:13:41.147 "num_base_bdevs": 3, 00:13:41.147 "num_base_bdevs_discovered": 3, 00:13:41.147 "num_base_bdevs_operational": 3, 00:13:41.147 "base_bdevs_list": [ 00:13:41.147 { 00:13:41.147 "name": "pt1", 00:13:41.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.147 "is_configured": true, 00:13:41.147 "data_offset": 2048, 00:13:41.147 "data_size": 63488 00:13:41.147 }, 00:13:41.147 { 00:13:41.147 "name": "pt2", 00:13:41.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.147 "is_configured": true, 00:13:41.147 "data_offset": 2048, 00:13:41.147 "data_size": 63488 00:13:41.147 }, 00:13:41.147 { 00:13:41.147 "name": "pt3", 00:13:41.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.147 "is_configured": true, 00:13:41.147 "data_offset": 2048, 00:13:41.147 "data_size": 63488 00:13:41.147 } 00:13:41.147 ] 00:13:41.147 }' 00:13:41.147 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.147 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.407 [2024-11-18 23:09:00.697742] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.407 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.407 "name": "raid_bdev1", 00:13:41.407 "aliases": [ 00:13:41.407 "2721b20e-beea-430d-9025-8b132a23b974" 00:13:41.407 ], 00:13:41.407 "product_name": "Raid Volume", 00:13:41.407 "block_size": 512, 00:13:41.407 "num_blocks": 126976, 00:13:41.407 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:41.407 "assigned_rate_limits": { 00:13:41.407 "rw_ios_per_sec": 0, 00:13:41.407 "rw_mbytes_per_sec": 0, 00:13:41.407 "r_mbytes_per_sec": 0, 00:13:41.407 "w_mbytes_per_sec": 0 00:13:41.407 }, 00:13:41.407 "claimed": false, 00:13:41.407 "zoned": false, 00:13:41.407 "supported_io_types": { 00:13:41.407 "read": true, 00:13:41.407 "write": true, 00:13:41.407 "unmap": false, 00:13:41.407 "flush": false, 00:13:41.407 "reset": true, 00:13:41.407 "nvme_admin": false, 00:13:41.407 "nvme_io": false, 00:13:41.407 "nvme_io_md": false, 00:13:41.407 "write_zeroes": true, 00:13:41.407 "zcopy": false, 00:13:41.407 
"get_zone_info": false, 00:13:41.407 "zone_management": false, 00:13:41.407 "zone_append": false, 00:13:41.407 "compare": false, 00:13:41.407 "compare_and_write": false, 00:13:41.407 "abort": false, 00:13:41.407 "seek_hole": false, 00:13:41.407 "seek_data": false, 00:13:41.407 "copy": false, 00:13:41.407 "nvme_iov_md": false 00:13:41.407 }, 00:13:41.407 "driver_specific": { 00:13:41.408 "raid": { 00:13:41.408 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:41.408 "strip_size_kb": 64, 00:13:41.408 "state": "online", 00:13:41.408 "raid_level": "raid5f", 00:13:41.408 "superblock": true, 00:13:41.408 "num_base_bdevs": 3, 00:13:41.408 "num_base_bdevs_discovered": 3, 00:13:41.408 "num_base_bdevs_operational": 3, 00:13:41.408 "base_bdevs_list": [ 00:13:41.408 { 00:13:41.408 "name": "pt1", 00:13:41.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.408 "is_configured": true, 00:13:41.408 "data_offset": 2048, 00:13:41.408 "data_size": 63488 00:13:41.408 }, 00:13:41.408 { 00:13:41.408 "name": "pt2", 00:13:41.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.408 "is_configured": true, 00:13:41.408 "data_offset": 2048, 00:13:41.408 "data_size": 63488 00:13:41.408 }, 00:13:41.408 { 00:13:41.408 "name": "pt3", 00:13:41.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.408 "is_configured": true, 00:13:41.408 "data_offset": 2048, 00:13:41.408 "data_size": 63488 00:13:41.408 } 00:13:41.408 ] 00:13:41.408 } 00:13:41.408 } 00:13:41.408 }' 00:13:41.408 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:41.668 pt2 00:13:41.668 pt3' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.668 23:09:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 [2024-11-18 23:09:00.965270] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2721b20e-beea-430d-9025-8b132a23b974 '!=' 2721b20e-beea-430d-9025-8b132a23b974 ']' 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:41.668 23:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 [2024-11-18 23:09:01.005092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.668 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.669 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.669 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.669 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.669 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:41.669 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.669 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.936 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.936 "name": "raid_bdev1", 00:13:41.936 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:41.936 "strip_size_kb": 64, 00:13:41.936 "state": "online", 00:13:41.936 "raid_level": "raid5f", 00:13:41.936 "superblock": true, 00:13:41.936 "num_base_bdevs": 3, 00:13:41.936 "num_base_bdevs_discovered": 2, 00:13:41.936 "num_base_bdevs_operational": 2, 00:13:41.936 "base_bdevs_list": [ 00:13:41.936 { 00:13:41.936 "name": null, 00:13:41.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.936 "is_configured": false, 00:13:41.936 "data_offset": 0, 00:13:41.936 "data_size": 63488 00:13:41.936 }, 00:13:41.936 { 00:13:41.936 "name": "pt2", 00:13:41.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.936 "is_configured": true, 00:13:41.936 "data_offset": 2048, 00:13:41.936 "data_size": 63488 00:13:41.936 }, 00:13:41.936 { 00:13:41.936 "name": "pt3", 00:13:41.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.936 "is_configured": true, 00:13:41.936 "data_offset": 2048, 00:13:41.936 "data_size": 63488 00:13:41.936 } 00:13:41.936 ] 00:13:41.936 }' 00:13:41.936 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.936 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 [2024-11-18 23:09:01.480249] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.199 [2024-11-18 23:09:01.480337] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.199 [2024-11-18 23:09:01.480421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.199 [2024-11-18 23:09:01.480489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.199 [2024-11-18 23:09:01.480544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 [2024-11-18 23:09:01.548140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.199 [2024-11-18 23:09:01.548232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.199 [2024-11-18 23:09:01.548253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:42.199 [2024-11-18 23:09:01.548261] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:42.199 [2024-11-18 23:09:01.550417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.199 [2024-11-18 23:09:01.550498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.199 [2024-11-18 23:09:01.550574] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:42.199 [2024-11-18 23:09:01.550620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.199 pt2 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.459 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.459 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.459 "name": "raid_bdev1", 00:13:42.459 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:42.459 "strip_size_kb": 64, 00:13:42.459 "state": "configuring", 00:13:42.459 "raid_level": "raid5f", 00:13:42.459 "superblock": true, 00:13:42.459 "num_base_bdevs": 3, 00:13:42.459 "num_base_bdevs_discovered": 1, 00:13:42.459 "num_base_bdevs_operational": 2, 00:13:42.459 "base_bdevs_list": [ 00:13:42.459 { 00:13:42.459 "name": null, 00:13:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.459 "is_configured": false, 00:13:42.459 "data_offset": 2048, 00:13:42.459 "data_size": 63488 00:13:42.459 }, 00:13:42.459 { 00:13:42.459 "name": "pt2", 00:13:42.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.459 "is_configured": true, 00:13:42.459 "data_offset": 2048, 00:13:42.459 "data_size": 63488 00:13:42.459 }, 00:13:42.459 { 00:13:42.459 "name": null, 00:13:42.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.459 "is_configured": false, 00:13:42.459 "data_offset": 2048, 00:13:42.459 "data_size": 63488 00:13:42.459 } 00:13:42.459 ] 00:13:42.459 }' 00:13:42.459 23:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.459 23:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.720 [2024-11-18 23:09:02.015412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.720 [2024-11-18 23:09:02.015520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.720 [2024-11-18 23:09:02.015553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:42.720 [2024-11-18 23:09:02.015579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.720 [2024-11-18 23:09:02.015904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.720 [2024-11-18 23:09:02.015960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.720 [2024-11-18 23:09:02.016038] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:42.720 [2024-11-18 23:09:02.016064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.720 [2024-11-18 23:09:02.016146] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:42.720 [2024-11-18 23:09:02.016155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:42.720 [2024-11-18 23:09:02.016389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:42.720 [2024-11-18 23:09:02.016826] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:42.720 [2024-11-18 23:09:02.016849] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:13:42.720 [2024-11-18 23:09:02.017050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.720 pt3 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.720 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.720 23:09:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.720 "name": "raid_bdev1", 00:13:42.720 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:42.720 "strip_size_kb": 64, 00:13:42.720 "state": "online", 00:13:42.720 "raid_level": "raid5f", 00:13:42.720 "superblock": true, 00:13:42.720 "num_base_bdevs": 3, 00:13:42.720 "num_base_bdevs_discovered": 2, 00:13:42.720 "num_base_bdevs_operational": 2, 00:13:42.720 "base_bdevs_list": [ 00:13:42.720 { 00:13:42.720 "name": null, 00:13:42.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.720 "is_configured": false, 00:13:42.720 "data_offset": 2048, 00:13:42.720 "data_size": 63488 00:13:42.720 }, 00:13:42.720 { 00:13:42.720 "name": "pt2", 00:13:42.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.720 "is_configured": true, 00:13:42.720 "data_offset": 2048, 00:13:42.720 "data_size": 63488 00:13:42.720 }, 00:13:42.720 { 00:13:42.720 "name": "pt3", 00:13:42.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.721 "is_configured": true, 00:13:42.721 "data_offset": 2048, 00:13:42.721 "data_size": 63488 00:13:42.721 } 00:13:42.721 ] 00:13:42.721 }' 00:13:42.721 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.721 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.291 [2024-11-18 23:09:02.486990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.291 [2024-11-18 23:09:02.487070] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.291 [2024-11-18 23:09:02.487158] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.291 [2024-11-18 23:09:02.487220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.291 [2024-11-18 23:09:02.487253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.291 [2024-11-18 23:09:02.562856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.291 [2024-11-18 23:09:02.562950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.291 [2024-11-18 23:09:02.562980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:43.291 [2024-11-18 23:09:02.563006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.291 [2024-11-18 23:09:02.565119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.291 [2024-11-18 23:09:02.565195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.291 [2024-11-18 23:09:02.565271] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.291 [2024-11-18 23:09:02.565356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.291 [2024-11-18 23:09:02.565477] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:43.291 [2024-11-18 23:09:02.565539] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.291 [2024-11-18 23:09:02.565580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:43.291 [2024-11-18 23:09:02.565656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.291 pt1 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:43.291 23:09:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.291 "name": "raid_bdev1", 00:13:43.291 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:43.291 "strip_size_kb": 64, 00:13:43.291 "state": "configuring", 00:13:43.291 "raid_level": "raid5f", 00:13:43.291 
"superblock": true, 00:13:43.291 "num_base_bdevs": 3, 00:13:43.291 "num_base_bdevs_discovered": 1, 00:13:43.291 "num_base_bdevs_operational": 2, 00:13:43.291 "base_bdevs_list": [ 00:13:43.291 { 00:13:43.291 "name": null, 00:13:43.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.291 "is_configured": false, 00:13:43.291 "data_offset": 2048, 00:13:43.291 "data_size": 63488 00:13:43.291 }, 00:13:43.291 { 00:13:43.291 "name": "pt2", 00:13:43.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.291 "is_configured": true, 00:13:43.291 "data_offset": 2048, 00:13:43.291 "data_size": 63488 00:13:43.291 }, 00:13:43.291 { 00:13:43.291 "name": null, 00:13:43.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.291 "is_configured": false, 00:13:43.291 "data_offset": 2048, 00:13:43.291 "data_size": 63488 00:13:43.291 } 00:13:43.291 ] 00:13:43.291 }' 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.291 23:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.861 [2024-11-18 23:09:03.089937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:43.861 [2024-11-18 23:09:03.089985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.861 [2024-11-18 23:09:03.089999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:43.861 [2024-11-18 23:09:03.090009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.861 [2024-11-18 23:09:03.090328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.861 [2024-11-18 23:09:03.090351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:43.861 [2024-11-18 23:09:03.090402] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:43.861 [2024-11-18 23:09:03.090421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:43.861 [2024-11-18 23:09:03.090487] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:43.861 [2024-11-18 23:09:03.090498] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:43.861 [2024-11-18 23:09:03.090692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:43.861 [2024-11-18 23:09:03.091096] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:43.861 [2024-11-18 23:09:03.091107] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:43.861 [2024-11-18 23:09:03.091262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.861 pt3 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.861 "name": "raid_bdev1", 00:13:43.861 "uuid": "2721b20e-beea-430d-9025-8b132a23b974", 00:13:43.861 "strip_size_kb": 64, 00:13:43.861 "state": "online", 00:13:43.861 "raid_level": 
"raid5f", 00:13:43.861 "superblock": true, 00:13:43.861 "num_base_bdevs": 3, 00:13:43.861 "num_base_bdevs_discovered": 2, 00:13:43.861 "num_base_bdevs_operational": 2, 00:13:43.861 "base_bdevs_list": [ 00:13:43.861 { 00:13:43.861 "name": null, 00:13:43.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.861 "is_configured": false, 00:13:43.861 "data_offset": 2048, 00:13:43.861 "data_size": 63488 00:13:43.861 }, 00:13:43.861 { 00:13:43.861 "name": "pt2", 00:13:43.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.861 "is_configured": true, 00:13:43.861 "data_offset": 2048, 00:13:43.861 "data_size": 63488 00:13:43.861 }, 00:13:43.861 { 00:13:43.861 "name": "pt3", 00:13:43.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.861 "is_configured": true, 00:13:43.861 "data_offset": 2048, 00:13:43.861 "data_size": 63488 00:13:43.861 } 00:13:43.861 ] 00:13:43.861 }' 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.861 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.432 [2024-11-18 23:09:03.641171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2721b20e-beea-430d-9025-8b132a23b974 '!=' 2721b20e-beea-430d-9025-8b132a23b974 ']' 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91618 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91618 ']' 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91618 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91618 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.432 killing process with pid 91618 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91618' 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91618 00:13:44.432 [2024-11-18 23:09:03.724480] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.432 [2024-11-18 23:09:03.724543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:44.432 [2024-11-18 23:09:03.724594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.432 [2024-11-18 23:09:03.724603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:44.432 23:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91618 00:13:44.432 [2024-11-18 23:09:03.757675] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.692 23:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:44.692 ************************************ 00:13:44.692 END TEST raid5f_superblock_test 00:13:44.692 ************************************ 00:13:44.692 00:13:44.692 real 0m6.780s 00:13:44.692 user 0m11.290s 00:13:44.692 sys 0m1.545s 00:13:44.692 23:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.692 23:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.956 23:09:04 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:44.956 23:09:04 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:44.956 23:09:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:44.956 23:09:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.956 23:09:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.956 ************************************ 00:13:44.956 START TEST raid5f_rebuild_test 00:13:44.956 ************************************ 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:44.956 23:09:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92051 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92051 00:13:44.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92051 ']' 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.956 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.957 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.957 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.957 23:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.957 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:44.957 Zero copy mechanism will not be used. 00:13:44.957 [2024-11-18 23:09:04.202473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:44.957 [2024-11-18 23:09:04.202636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92051 ] 00:13:45.231 [2024-11-18 23:09:04.369861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.231 [2024-11-18 23:09:04.417400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.231 [2024-11-18 23:09:04.460458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.231 [2024-11-18 23:09:04.460496] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.839 BaseBdev1_malloc 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.839 23:09:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.839 [2024-11-18 23:09:05.035023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:45.839 [2024-11-18 23:09:05.035092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.839 [2024-11-18 23:09:05.035117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:45.839 [2024-11-18 23:09:05.035131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.839 [2024-11-18 23:09:05.037266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.839 [2024-11-18 23:09:05.037312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.839 BaseBdev1 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.839 BaseBdev2_malloc 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.839 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 [2024-11-18 23:09:05.080387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:45.840 [2024-11-18 23:09:05.080512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.840 [2024-11-18 23:09:05.080568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:45.840 [2024-11-18 23:09:05.080596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.840 [2024-11-18 23:09:05.085174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.840 [2024-11-18 23:09:05.085239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.840 BaseBdev2 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 BaseBdev3_malloc 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 [2024-11-18 23:09:05.111437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:45.840 [2024-11-18 23:09:05.111487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.840 [2024-11-18 23:09:05.111512] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:13:45.840 [2024-11-18 23:09:05.111521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.840 [2024-11-18 23:09:05.113577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.840 [2024-11-18 23:09:05.113611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:45.840 BaseBdev3 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 spare_malloc 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 spare_delay 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 [2024-11-18 23:09:05.151844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.840 [2024-11-18 23:09:05.151888] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.840 [2024-11-18 23:09:05.151910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:45.840 [2024-11-18 23:09:05.151918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.840 [2024-11-18 23:09:05.153885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.840 [2024-11-18 23:09:05.153921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.840 spare 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 [2024-11-18 23:09:05.163887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.840 [2024-11-18 23:09:05.165687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.840 [2024-11-18 23:09:05.165747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.840 [2024-11-18 23:09:05.165820] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:45.840 [2024-11-18 23:09:05.165829] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:45.840 [2024-11-18 23:09:05.166041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:45.840 [2024-11-18 23:09:05.166428] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:45.840 [2024-11-18 23:09:05.166440] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:45.840 [2024-11-18 23:09:05.166548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.840 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.100 23:09:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.100 "name": "raid_bdev1", 00:13:46.100 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:46.100 "strip_size_kb": 64, 00:13:46.100 "state": "online", 00:13:46.100 "raid_level": "raid5f", 00:13:46.100 "superblock": false, 00:13:46.100 "num_base_bdevs": 3, 00:13:46.100 "num_base_bdevs_discovered": 3, 00:13:46.100 "num_base_bdevs_operational": 3, 00:13:46.100 "base_bdevs_list": [ 00:13:46.100 { 00:13:46.100 "name": "BaseBdev1", 00:13:46.100 "uuid": "daab94d0-e2cb-544c-8938-3e9099a7fb6d", 00:13:46.100 "is_configured": true, 00:13:46.100 "data_offset": 0, 00:13:46.100 "data_size": 65536 00:13:46.100 }, 00:13:46.100 { 00:13:46.100 "name": "BaseBdev2", 00:13:46.100 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:46.100 "is_configured": true, 00:13:46.100 "data_offset": 0, 00:13:46.100 "data_size": 65536 00:13:46.100 }, 00:13:46.100 { 00:13:46.100 "name": "BaseBdev3", 00:13:46.100 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:46.100 "is_configured": true, 00:13:46.100 "data_offset": 0, 00:13:46.100 "data_size": 65536 00:13:46.100 } 00:13:46.100 ] 00:13:46.100 }' 00:13:46.100 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.100 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.360 [2024-11-18 23:09:05.671493] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.360 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:46.620 [2024-11-18 23:09:05.934901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:46.620 /dev/nbd0 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:46.620 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.620 1+0 records in 00:13:46.620 1+0 records out 00:13:46.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352325 s, 11.6 MB/s 00:13:46.879 23:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.879 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:46.879 23:09:06 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.879 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:46.879 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:46.880 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.880 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.880 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:46.880 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:46.880 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:46.880 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:47.148 512+0 records in 00:13:47.148 512+0 records out 00:13:47.148 67108864 bytes (67 MB, 64 MiB) copied, 0.285308 s, 235 MB/s 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:47.148 [2024-11-18 23:09:06.501352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.148 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.437 [2024-11-18 23:09:06.526632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.437 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.437 "name": "raid_bdev1", 00:13:47.437 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:47.437 "strip_size_kb": 64, 00:13:47.437 "state": "online", 00:13:47.437 "raid_level": "raid5f", 00:13:47.437 "superblock": false, 00:13:47.437 "num_base_bdevs": 3, 00:13:47.437 "num_base_bdevs_discovered": 2, 00:13:47.437 "num_base_bdevs_operational": 2, 00:13:47.437 "base_bdevs_list": [ 00:13:47.437 { 00:13:47.437 "name": null, 00:13:47.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.437 "is_configured": false, 00:13:47.437 "data_offset": 0, 00:13:47.437 "data_size": 65536 00:13:47.437 }, 00:13:47.437 { 00:13:47.437 "name": "BaseBdev2", 00:13:47.437 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:47.437 "is_configured": true, 00:13:47.437 "data_offset": 0, 00:13:47.437 "data_size": 65536 00:13:47.437 }, 00:13:47.438 { 00:13:47.438 "name": "BaseBdev3", 00:13:47.438 "uuid": 
"25f50366-14ba-5434-b973-d367a50b67a1", 00:13:47.438 "is_configured": true, 00:13:47.438 "data_offset": 0, 00:13:47.438 "data_size": 65536 00:13:47.438 } 00:13:47.438 ] 00:13:47.438 }' 00:13:47.438 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.438 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.698 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.698 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.698 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.698 [2024-11-18 23:09:06.961916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.698 [2024-11-18 23:09:06.965724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:47.698 [2024-11-18 23:09:06.967875] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.698 23:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.698 23:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.636 23:09:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.636 23:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.896 "name": "raid_bdev1", 00:13:48.896 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:48.896 "strip_size_kb": 64, 00:13:48.896 "state": "online", 00:13:48.896 "raid_level": "raid5f", 00:13:48.896 "superblock": false, 00:13:48.896 "num_base_bdevs": 3, 00:13:48.896 "num_base_bdevs_discovered": 3, 00:13:48.896 "num_base_bdevs_operational": 3, 00:13:48.896 "process": { 00:13:48.896 "type": "rebuild", 00:13:48.896 "target": "spare", 00:13:48.896 "progress": { 00:13:48.896 "blocks": 20480, 00:13:48.896 "percent": 15 00:13:48.896 } 00:13:48.896 }, 00:13:48.896 "base_bdevs_list": [ 00:13:48.896 { 00:13:48.896 "name": "spare", 00:13:48.896 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:48.896 "is_configured": true, 00:13:48.896 "data_offset": 0, 00:13:48.896 "data_size": 65536 00:13:48.896 }, 00:13:48.896 { 00:13:48.896 "name": "BaseBdev2", 00:13:48.896 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:48.896 "is_configured": true, 00:13:48.896 "data_offset": 0, 00:13:48.896 "data_size": 65536 00:13:48.896 }, 00:13:48.896 { 00:13:48.896 "name": "BaseBdev3", 00:13:48.896 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:48.896 "is_configured": true, 00:13:48.896 "data_offset": 0, 00:13:48.896 "data_size": 65536 00:13:48.896 } 00:13:48.896 ] 00:13:48.896 }' 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.896 [2024-11-18 23:09:08.114590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.896 [2024-11-18 23:09:08.174491] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.896 [2024-11-18 23:09:08.174595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.896 [2024-11-18 23:09:08.174610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.896 [2024-11-18 23:09:08.174620] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.896 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.897 "name": "raid_bdev1", 00:13:48.897 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:48.897 "strip_size_kb": 64, 00:13:48.897 "state": "online", 00:13:48.897 "raid_level": "raid5f", 00:13:48.897 "superblock": false, 00:13:48.897 "num_base_bdevs": 3, 00:13:48.897 "num_base_bdevs_discovered": 2, 00:13:48.897 "num_base_bdevs_operational": 2, 00:13:48.897 "base_bdevs_list": [ 00:13:48.897 { 00:13:48.897 "name": null, 00:13:48.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.897 "is_configured": false, 00:13:48.897 "data_offset": 0, 00:13:48.897 "data_size": 65536 00:13:48.897 }, 00:13:48.897 { 00:13:48.897 "name": "BaseBdev2", 00:13:48.897 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:48.897 "is_configured": true, 00:13:48.897 "data_offset": 0, 00:13:48.897 "data_size": 65536 00:13:48.897 }, 00:13:48.897 { 00:13:48.897 "name": "BaseBdev3", 00:13:48.897 "uuid": 
"25f50366-14ba-5434-b973-d367a50b67a1", 00:13:48.897 "is_configured": true, 00:13:48.897 "data_offset": 0, 00:13:48.897 "data_size": 65536 00:13:48.897 } 00:13:48.897 ] 00:13:48.897 }' 00:13:48.897 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.897 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.465 "name": "raid_bdev1", 00:13:49.465 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:49.465 "strip_size_kb": 64, 00:13:49.465 "state": "online", 00:13:49.465 "raid_level": "raid5f", 00:13:49.465 "superblock": false, 00:13:49.465 "num_base_bdevs": 3, 00:13:49.465 "num_base_bdevs_discovered": 2, 00:13:49.465 "num_base_bdevs_operational": 2, 00:13:49.465 "base_bdevs_list": [ 00:13:49.465 { 00:13:49.465 
"name": null, 00:13:49.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.465 "is_configured": false, 00:13:49.465 "data_offset": 0, 00:13:49.465 "data_size": 65536 00:13:49.465 }, 00:13:49.465 { 00:13:49.465 "name": "BaseBdev2", 00:13:49.465 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:49.465 "is_configured": true, 00:13:49.465 "data_offset": 0, 00:13:49.465 "data_size": 65536 00:13:49.465 }, 00:13:49.465 { 00:13:49.465 "name": "BaseBdev3", 00:13:49.465 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:49.465 "is_configured": true, 00:13:49.465 "data_offset": 0, 00:13:49.465 "data_size": 65536 00:13:49.465 } 00:13:49.465 ] 00:13:49.465 }' 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.465 [2024-11-18 23:09:08.779007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.465 [2024-11-18 23:09:08.782093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:49.465 [2024-11-18 23:09:08.784356] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.465 23:09:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.851 "name": "raid_bdev1", 00:13:50.851 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:50.851 "strip_size_kb": 64, 00:13:50.851 "state": "online", 00:13:50.851 "raid_level": "raid5f", 00:13:50.851 "superblock": false, 00:13:50.851 "num_base_bdevs": 3, 00:13:50.851 "num_base_bdevs_discovered": 3, 00:13:50.851 "num_base_bdevs_operational": 3, 00:13:50.851 "process": { 00:13:50.851 "type": "rebuild", 00:13:50.851 "target": "spare", 00:13:50.851 "progress": { 00:13:50.851 "blocks": 20480, 00:13:50.851 "percent": 15 00:13:50.851 } 00:13:50.851 }, 00:13:50.851 "base_bdevs_list": [ 00:13:50.851 { 00:13:50.851 "name": "spare", 00:13:50.851 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:50.851 "is_configured": true, 00:13:50.851 
"data_offset": 0, 00:13:50.851 "data_size": 65536 00:13:50.851 }, 00:13:50.851 { 00:13:50.851 "name": "BaseBdev2", 00:13:50.851 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:50.851 "is_configured": true, 00:13:50.851 "data_offset": 0, 00:13:50.851 "data_size": 65536 00:13:50.851 }, 00:13:50.851 { 00:13:50.851 "name": "BaseBdev3", 00:13:50.851 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:50.851 "is_configured": true, 00:13:50.851 "data_offset": 0, 00:13:50.851 "data_size": 65536 00:13:50.851 } 00:13:50.851 ] 00:13:50.851 }' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.851 
23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.851 "name": "raid_bdev1", 00:13:50.851 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:50.851 "strip_size_kb": 64, 00:13:50.851 "state": "online", 00:13:50.851 "raid_level": "raid5f", 00:13:50.851 "superblock": false, 00:13:50.851 "num_base_bdevs": 3, 00:13:50.851 "num_base_bdevs_discovered": 3, 00:13:50.851 "num_base_bdevs_operational": 3, 00:13:50.851 "process": { 00:13:50.851 "type": "rebuild", 00:13:50.851 "target": "spare", 00:13:50.851 "progress": { 00:13:50.851 "blocks": 22528, 00:13:50.851 "percent": 17 00:13:50.851 } 00:13:50.851 }, 00:13:50.851 "base_bdevs_list": [ 00:13:50.851 { 00:13:50.851 "name": "spare", 00:13:50.851 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:50.851 "is_configured": true, 00:13:50.851 "data_offset": 0, 00:13:50.851 "data_size": 65536 00:13:50.851 }, 00:13:50.851 { 00:13:50.851 "name": "BaseBdev2", 00:13:50.851 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:50.851 "is_configured": true, 00:13:50.851 "data_offset": 0, 00:13:50.851 "data_size": 65536 00:13:50.851 }, 00:13:50.851 { 00:13:50.851 "name": "BaseBdev3", 00:13:50.851 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:50.851 "is_configured": true, 00:13:50.851 "data_offset": 0, 00:13:50.851 "data_size": 65536 00:13:50.851 
} 00:13:50.851 ] 00:13:50.851 }' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.851 23:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.851 23:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.851 23:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.791 "name": "raid_bdev1", 00:13:51.791 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:51.791 
"strip_size_kb": 64, 00:13:51.791 "state": "online", 00:13:51.791 "raid_level": "raid5f", 00:13:51.791 "superblock": false, 00:13:51.791 "num_base_bdevs": 3, 00:13:51.791 "num_base_bdevs_discovered": 3, 00:13:51.791 "num_base_bdevs_operational": 3, 00:13:51.791 "process": { 00:13:51.791 "type": "rebuild", 00:13:51.791 "target": "spare", 00:13:51.791 "progress": { 00:13:51.791 "blocks": 45056, 00:13:51.791 "percent": 34 00:13:51.791 } 00:13:51.791 }, 00:13:51.791 "base_bdevs_list": [ 00:13:51.791 { 00:13:51.791 "name": "spare", 00:13:51.791 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:51.791 "is_configured": true, 00:13:51.791 "data_offset": 0, 00:13:51.791 "data_size": 65536 00:13:51.791 }, 00:13:51.791 { 00:13:51.791 "name": "BaseBdev2", 00:13:51.791 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:51.791 "is_configured": true, 00:13:51.791 "data_offset": 0, 00:13:51.791 "data_size": 65536 00:13:51.791 }, 00:13:51.791 { 00:13:51.791 "name": "BaseBdev3", 00:13:51.791 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:51.791 "is_configured": true, 00:13:51.791 "data_offset": 0, 00:13:51.791 "data_size": 65536 00:13:51.791 } 00:13:51.791 ] 00:13:51.791 }' 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.791 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.051 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.051 23:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.988 23:09:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.988 "name": "raid_bdev1", 00:13:52.988 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:52.988 "strip_size_kb": 64, 00:13:52.988 "state": "online", 00:13:52.988 "raid_level": "raid5f", 00:13:52.988 "superblock": false, 00:13:52.988 "num_base_bdevs": 3, 00:13:52.988 "num_base_bdevs_discovered": 3, 00:13:52.988 "num_base_bdevs_operational": 3, 00:13:52.988 "process": { 00:13:52.988 "type": "rebuild", 00:13:52.988 "target": "spare", 00:13:52.988 "progress": { 00:13:52.988 "blocks": 67584, 00:13:52.988 "percent": 51 00:13:52.988 } 00:13:52.988 }, 00:13:52.988 "base_bdevs_list": [ 00:13:52.988 { 00:13:52.988 "name": "spare", 00:13:52.988 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:52.988 "is_configured": true, 00:13:52.988 "data_offset": 0, 00:13:52.988 "data_size": 65536 00:13:52.988 }, 00:13:52.988 { 00:13:52.988 "name": "BaseBdev2", 00:13:52.988 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:52.988 
"is_configured": true, 00:13:52.988 "data_offset": 0, 00:13:52.988 "data_size": 65536 00:13:52.988 }, 00:13:52.988 { 00:13:52.988 "name": "BaseBdev3", 00:13:52.988 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:52.988 "is_configured": true, 00:13:52.988 "data_offset": 0, 00:13:52.988 "data_size": 65536 00:13:52.988 } 00:13:52.988 ] 00:13:52.988 }' 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.988 23:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.367 "name": "raid_bdev1", 00:13:54.367 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:54.367 "strip_size_kb": 64, 00:13:54.367 "state": "online", 00:13:54.367 "raid_level": "raid5f", 00:13:54.367 "superblock": false, 00:13:54.367 "num_base_bdevs": 3, 00:13:54.367 "num_base_bdevs_discovered": 3, 00:13:54.367 "num_base_bdevs_operational": 3, 00:13:54.367 "process": { 00:13:54.367 "type": "rebuild", 00:13:54.367 "target": "spare", 00:13:54.367 "progress": { 00:13:54.367 "blocks": 92160, 00:13:54.367 "percent": 70 00:13:54.367 } 00:13:54.367 }, 00:13:54.367 "base_bdevs_list": [ 00:13:54.367 { 00:13:54.367 "name": "spare", 00:13:54.367 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:54.367 "is_configured": true, 00:13:54.367 "data_offset": 0, 00:13:54.367 "data_size": 65536 00:13:54.367 }, 00:13:54.367 { 00:13:54.367 "name": "BaseBdev2", 00:13:54.367 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:54.367 "is_configured": true, 00:13:54.367 "data_offset": 0, 00:13:54.367 "data_size": 65536 00:13:54.367 }, 00:13:54.367 { 00:13:54.367 "name": "BaseBdev3", 00:13:54.367 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:54.367 "is_configured": true, 00:13:54.367 "data_offset": 0, 00:13:54.367 "data_size": 65536 00:13:54.367 } 00:13:54.367 ] 00:13:54.367 }' 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.367 23:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.367 23:09:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.307 "name": "raid_bdev1", 00:13:55.307 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:55.307 "strip_size_kb": 64, 00:13:55.307 "state": "online", 00:13:55.307 "raid_level": "raid5f", 00:13:55.307 "superblock": false, 00:13:55.307 "num_base_bdevs": 3, 00:13:55.307 "num_base_bdevs_discovered": 3, 00:13:55.307 "num_base_bdevs_operational": 3, 00:13:55.307 "process": { 00:13:55.307 "type": "rebuild", 00:13:55.307 "target": "spare", 00:13:55.307 "progress": { 00:13:55.307 "blocks": 114688, 00:13:55.307 "percent": 87 00:13:55.307 } 00:13:55.307 }, 00:13:55.307 "base_bdevs_list": [ 00:13:55.307 { 
00:13:55.307 "name": "spare", 00:13:55.307 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:55.307 "is_configured": true, 00:13:55.307 "data_offset": 0, 00:13:55.307 "data_size": 65536 00:13:55.307 }, 00:13:55.307 { 00:13:55.307 "name": "BaseBdev2", 00:13:55.307 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:55.307 "is_configured": true, 00:13:55.307 "data_offset": 0, 00:13:55.307 "data_size": 65536 00:13:55.307 }, 00:13:55.307 { 00:13:55.307 "name": "BaseBdev3", 00:13:55.307 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:55.307 "is_configured": true, 00:13:55.307 "data_offset": 0, 00:13:55.307 "data_size": 65536 00:13:55.307 } 00:13:55.307 ] 00:13:55.307 }' 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.307 23:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.875 [2024-11-18 23:09:15.216875] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.875 [2024-11-18 23:09:15.216939] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.875 [2024-11-18 23:09:15.216979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.444 23:09:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.444 "name": "raid_bdev1", 00:13:56.444 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:56.444 "strip_size_kb": 64, 00:13:56.444 "state": "online", 00:13:56.444 "raid_level": "raid5f", 00:13:56.444 "superblock": false, 00:13:56.444 "num_base_bdevs": 3, 00:13:56.444 "num_base_bdevs_discovered": 3, 00:13:56.444 "num_base_bdevs_operational": 3, 00:13:56.444 "base_bdevs_list": [ 00:13:56.444 { 00:13:56.444 "name": "spare", 00:13:56.444 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:56.444 "is_configured": true, 00:13:56.444 "data_offset": 0, 00:13:56.444 "data_size": 65536 00:13:56.444 }, 00:13:56.444 { 00:13:56.444 "name": "BaseBdev2", 00:13:56.444 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:56.444 "is_configured": true, 00:13:56.444 "data_offset": 0, 00:13:56.444 "data_size": 65536 00:13:56.444 }, 00:13:56.444 { 00:13:56.444 "name": "BaseBdev3", 00:13:56.444 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:56.444 "is_configured": true, 00:13:56.444 "data_offset": 0, 00:13:56.444 "data_size": 65536 00:13:56.444 } 
00:13:56.444 ] 00:13:56.444 }' 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.444 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.703 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.703 "name": "raid_bdev1", 00:13:56.703 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:56.703 "strip_size_kb": 64, 00:13:56.703 "state": "online", 00:13:56.703 "raid_level": "raid5f", 00:13:56.703 "superblock": false, 
00:13:56.703 "num_base_bdevs": 3, 00:13:56.703 "num_base_bdevs_discovered": 3, 00:13:56.704 "num_base_bdevs_operational": 3, 00:13:56.704 "base_bdevs_list": [ 00:13:56.704 { 00:13:56.704 "name": "spare", 00:13:56.704 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:56.704 "is_configured": true, 00:13:56.704 "data_offset": 0, 00:13:56.704 "data_size": 65536 00:13:56.704 }, 00:13:56.704 { 00:13:56.704 "name": "BaseBdev2", 00:13:56.704 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:56.704 "is_configured": true, 00:13:56.704 "data_offset": 0, 00:13:56.704 "data_size": 65536 00:13:56.704 }, 00:13:56.704 { 00:13:56.704 "name": "BaseBdev3", 00:13:56.704 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 00:13:56.704 "is_configured": true, 00:13:56.704 "data_offset": 0, 00:13:56.704 "data_size": 65536 00:13:56.704 } 00:13:56.704 ] 00:13:56.704 }' 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.704 
23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.704 "name": "raid_bdev1", 00:13:56.704 "uuid": "082cecdb-c39c-4d46-9709-6144b4aa5066", 00:13:56.704 "strip_size_kb": 64, 00:13:56.704 "state": "online", 00:13:56.704 "raid_level": "raid5f", 00:13:56.704 "superblock": false, 00:13:56.704 "num_base_bdevs": 3, 00:13:56.704 "num_base_bdevs_discovered": 3, 00:13:56.704 "num_base_bdevs_operational": 3, 00:13:56.704 "base_bdevs_list": [ 00:13:56.704 { 00:13:56.704 "name": "spare", 00:13:56.704 "uuid": "c93e4ca1-4a7e-5e5b-b97b-6a1d72885d1a", 00:13:56.704 "is_configured": true, 00:13:56.704 "data_offset": 0, 00:13:56.704 "data_size": 65536 00:13:56.704 }, 00:13:56.704 { 00:13:56.704 "name": "BaseBdev2", 00:13:56.704 "uuid": "cdea4bd7-78cc-517a-8cc5-db323733a18e", 00:13:56.704 "is_configured": true, 00:13:56.704 "data_offset": 0, 00:13:56.704 "data_size": 65536 00:13:56.704 }, 00:13:56.704 { 00:13:56.704 "name": "BaseBdev3", 00:13:56.704 "uuid": "25f50366-14ba-5434-b973-d367a50b67a1", 
00:13:56.704 "is_configured": true, 00:13:56.704 "data_offset": 0, 00:13:56.704 "data_size": 65536 00:13:56.704 } 00:13:56.704 ] 00:13:56.704 }' 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.704 23:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.964 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.964 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.964 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.964 [2024-11-18 23:09:16.332139] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.964 [2024-11-18 23:09:16.332221] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.964 [2024-11-18 23:09:16.332326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.964 [2024-11-18 23:09:16.332402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.964 [2024-11-18 23:09:16.332417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:56.964 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.225 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:57.225 /dev/nbd0 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.485 1+0 records in 00:13:57.485 1+0 records out 00:13:57.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571883 s, 7.2 MB/s 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.485 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.486 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:57.486 /dev/nbd1 00:13:57.746 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.746 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.746 23:09:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:57.746 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:57.746 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.747 1+0 records in 00:13:57.747 1+0 records out 00:13:57.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049778 s, 8.2 MB/s 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # 
cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.747 23:09:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.007 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92051 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92051 ']' 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92051 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92051 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.266 killing process with pid 92051 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92051' 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92051 00:13:58.266 
Received shutdown signal, test time was about 60.000000 seconds 00:13:58.266 00:13:58.266 Latency(us) 00:13:58.266 [2024-11-18T23:09:17.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.266 [2024-11-18T23:09:17.644Z] =================================================================================================================== 00:13:58.266 [2024-11-18T23:09:17.644Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:58.266 [2024-11-18 23:09:17.473064] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.266 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92051 00:13:58.266 [2024-11-18 23:09:17.513278] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.525 00:13:58.525 real 0m13.650s 00:13:58.525 user 0m17.030s 00:13:58.525 sys 0m2.030s 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 ************************************ 00:13:58.525 END TEST raid5f_rebuild_test 00:13:58.525 ************************************ 00:13:58.525 23:09:17 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:58.525 23:09:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:58.525 23:09:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.525 23:09:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 ************************************ 00:13:58.525 START TEST raid5f_rebuild_test_sb 00:13:58.525 ************************************ 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:58.525 
23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92470 00:13:58.525 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92470 00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92470 ']' 00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.526 23:09:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.785 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.785 Zero copy mechanism will not be used. 00:13:58.785 [2024-11-18 23:09:17.926937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:58.785 [2024-11-18 23:09:17.927067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92470 ] 00:13:58.785 [2024-11-18 23:09:18.090589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.785 [2024-11-18 23:09:18.137822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.044 [2024-11-18 23:09:18.180847] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.044 [2024-11-18 23:09:18.180885] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.613 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.613 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:59.613 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 BaseBdev1_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 [2024-11-18 23:09:18.774931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.614 [2024-11-18 23:09:18.774992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.614 [2024-11-18 23:09:18.775015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.614 [2024-11-18 23:09:18.775029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.614 [2024-11-18 23:09:18.777120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.614 [2024-11-18 23:09:18.777170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.614 BaseBdev1 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 BaseBdev2_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 [2024-11-18 23:09:18.816981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.614 [2024-11-18 23:09:18.817079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.614 [2024-11-18 23:09:18.817122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.614 [2024-11-18 23:09:18.817143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.614 [2024-11-18 23:09:18.821490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.614 [2024-11-18 23:09:18.821540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.614 BaseBdev2 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 BaseBdev3_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 [2024-11-18 23:09:18.847315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:59.614 [2024-11-18 23:09:18.847355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.614 [2024-11-18 23:09:18.847376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:59.614 [2024-11-18 23:09:18.847401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.614 [2024-11-18 23:09:18.849365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.614 [2024-11-18 23:09:18.849395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.614 BaseBdev3 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 spare_malloc 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 spare_delay 00:13:59.614 
23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 [2024-11-18 23:09:18.887641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.614 [2024-11-18 23:09:18.887685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.614 [2024-11-18 23:09:18.887721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:59.614 [2024-11-18 23:09:18.887729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.614 [2024-11-18 23:09:18.889710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.614 [2024-11-18 23:09:18.889754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.614 spare 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 [2024-11-18 23:09:18.899693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.614 [2024-11-18 23:09:18.901445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.614 [2024-11-18 23:09:18.901507] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.614 [2024-11-18 23:09:18.901644] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:59.614 [2024-11-18 23:09:18.901658] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:59.614 [2024-11-18 23:09:18.901895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:59.614 [2024-11-18 23:09:18.902292] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:59.614 [2024-11-18 23:09:18.902322] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:59.614 [2024-11-18 23:09:18.902440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.614 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.614 "name": "raid_bdev1", 00:13:59.614 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:13:59.614 "strip_size_kb": 64, 00:13:59.614 "state": "online", 00:13:59.614 "raid_level": "raid5f", 00:13:59.614 "superblock": true, 00:13:59.614 "num_base_bdevs": 3, 00:13:59.614 "num_base_bdevs_discovered": 3, 00:13:59.614 "num_base_bdevs_operational": 3, 00:13:59.614 "base_bdevs_list": [ 00:13:59.614 { 00:13:59.614 "name": "BaseBdev1", 00:13:59.614 "uuid": "e7037802-a1d1-5ca9-b893-77d895621dcd", 00:13:59.614 "is_configured": true, 00:13:59.614 "data_offset": 2048, 00:13:59.614 "data_size": 63488 00:13:59.614 }, 00:13:59.614 { 00:13:59.614 "name": "BaseBdev2", 00:13:59.614 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:13:59.614 "is_configured": true, 00:13:59.614 "data_offset": 2048, 00:13:59.614 "data_size": 63488 00:13:59.614 }, 00:13:59.614 { 00:13:59.614 "name": "BaseBdev3", 00:13:59.614 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:13:59.614 "is_configured": true, 00:13:59.614 "data_offset": 2048, 00:13:59.614 "data_size": 63488 00:13:59.615 } 00:13:59.615 ] 00:13:59.615 }' 00:13:59.615 23:09:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.615 23:09:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 [2024-11-18 23:09:19.403112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:00.184 23:09:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.184 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:00.443 [2024-11-18 23:09:19.678485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:00.443 /dev/nbd0 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.443 1+0 records in 00:14:00.443 1+0 records out 00:14:00.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481264 s, 8.5 MB/s 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:00.443 23:09:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:01.013 496+0 records in 00:14:01.013 496+0 records out 00:14:01.013 65011712 bytes (65 MB, 62 MiB) copied, 0.306969 s, 212 MB/s 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.013 [2024-11-18 23:09:20.317325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.013 [2024-11-18 23:09:20.333401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.013 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.274 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.274 "name": "raid_bdev1", 00:14:01.274 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:01.274 "strip_size_kb": 64, 00:14:01.274 "state": "online", 00:14:01.274 "raid_level": "raid5f", 00:14:01.274 "superblock": true, 00:14:01.274 "num_base_bdevs": 3, 00:14:01.274 "num_base_bdevs_discovered": 2, 00:14:01.274 "num_base_bdevs_operational": 2, 00:14:01.274 "base_bdevs_list": [ 00:14:01.274 { 00:14:01.274 "name": null, 00:14:01.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.274 "is_configured": false, 00:14:01.274 "data_offset": 0, 00:14:01.274 "data_size": 63488 00:14:01.274 }, 00:14:01.274 { 00:14:01.274 "name": "BaseBdev2", 00:14:01.274 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:01.274 "is_configured": true, 00:14:01.274 "data_offset": 2048, 00:14:01.274 "data_size": 63488 00:14:01.274 }, 00:14:01.274 { 00:14:01.274 "name": "BaseBdev3", 00:14:01.274 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:01.274 "is_configured": true, 00:14:01.274 "data_offset": 2048, 00:14:01.274 "data_size": 63488 00:14:01.274 } 00:14:01.274 ] 00:14:01.274 }' 00:14:01.274 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.274 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.534 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.534 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.534 [2024-11-18 23:09:20.832530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.534 [2024-11-18 23:09:20.836316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:14:01.534 [2024-11-18 23:09:20.838463] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.534 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.534 23:09:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:02.472 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.472 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.472 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.472 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.472 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.732 "name": "raid_bdev1", 00:14:02.732 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:02.732 "strip_size_kb": 64, 00:14:02.732 "state": "online", 00:14:02.732 "raid_level": "raid5f", 00:14:02.732 "superblock": true, 00:14:02.732 "num_base_bdevs": 3, 00:14:02.732 "num_base_bdevs_discovered": 3, 00:14:02.732 "num_base_bdevs_operational": 3, 00:14:02.732 "process": { 00:14:02.732 "type": "rebuild", 00:14:02.732 "target": "spare", 00:14:02.732 
"progress": { 00:14:02.732 "blocks": 20480, 00:14:02.732 "percent": 16 00:14:02.732 } 00:14:02.732 }, 00:14:02.732 "base_bdevs_list": [ 00:14:02.732 { 00:14:02.732 "name": "spare", 00:14:02.732 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:02.732 "is_configured": true, 00:14:02.732 "data_offset": 2048, 00:14:02.732 "data_size": 63488 00:14:02.732 }, 00:14:02.732 { 00:14:02.732 "name": "BaseBdev2", 00:14:02.732 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:02.732 "is_configured": true, 00:14:02.732 "data_offset": 2048, 00:14:02.732 "data_size": 63488 00:14:02.732 }, 00:14:02.732 { 00:14:02.732 "name": "BaseBdev3", 00:14:02.732 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:02.732 "is_configured": true, 00:14:02.732 "data_offset": 2048, 00:14:02.732 "data_size": 63488 00:14:02.732 } 00:14:02.732 ] 00:14:02.732 }' 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.732 23:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 [2024-11-18 23:09:22.001207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.732 [2024-11-18 23:09:22.045126] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.732 [2024-11-18 23:09:22.045181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:02.732 [2024-11-18 23:09:22.045195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.732 [2024-11-18 23:09:22.045208] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.732 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 23:09:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.992 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.992 "name": "raid_bdev1", 00:14:02.992 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:02.992 "strip_size_kb": 64, 00:14:02.992 "state": "online", 00:14:02.992 "raid_level": "raid5f", 00:14:02.992 "superblock": true, 00:14:02.992 "num_base_bdevs": 3, 00:14:02.992 "num_base_bdevs_discovered": 2, 00:14:02.992 "num_base_bdevs_operational": 2, 00:14:02.992 "base_bdevs_list": [ 00:14:02.992 { 00:14:02.992 "name": null, 00:14:02.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.992 "is_configured": false, 00:14:02.992 "data_offset": 0, 00:14:02.992 "data_size": 63488 00:14:02.992 }, 00:14:02.992 { 00:14:02.992 "name": "BaseBdev2", 00:14:02.992 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:02.992 "is_configured": true, 00:14:02.992 "data_offset": 2048, 00:14:02.992 "data_size": 63488 00:14:02.992 }, 00:14:02.992 { 00:14:02.992 "name": "BaseBdev3", 00:14:02.992 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:02.992 "is_configured": true, 00:14:02.992 "data_offset": 2048, 00:14:02.992 "data_size": 63488 00:14:02.992 } 00:14:02.992 ] 00:14:02.992 }' 00:14:02.992 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.992 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.261 "name": "raid_bdev1", 00:14:03.261 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:03.261 "strip_size_kb": 64, 00:14:03.261 "state": "online", 00:14:03.261 "raid_level": "raid5f", 00:14:03.261 "superblock": true, 00:14:03.261 "num_base_bdevs": 3, 00:14:03.261 "num_base_bdevs_discovered": 2, 00:14:03.261 "num_base_bdevs_operational": 2, 00:14:03.261 "base_bdevs_list": [ 00:14:03.261 { 00:14:03.261 "name": null, 00:14:03.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.261 "is_configured": false, 00:14:03.261 "data_offset": 0, 00:14:03.261 "data_size": 63488 00:14:03.261 }, 00:14:03.261 { 00:14:03.261 "name": "BaseBdev2", 00:14:03.261 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:03.261 "is_configured": true, 00:14:03.261 "data_offset": 2048, 00:14:03.261 "data_size": 63488 00:14:03.261 }, 00:14:03.261 { 00:14:03.261 "name": "BaseBdev3", 00:14:03.261 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:03.261 "is_configured": true, 00:14:03.261 "data_offset": 2048, 00:14:03.261 "data_size": 63488 00:14:03.261 } 00:14:03.261 ] 00:14:03.261 }' 00:14:03.261 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.533 [2024-11-18 23:09:22.689513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.533 [2024-11-18 23:09:22.693189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:14:03.533 [2024-11-18 23:09:22.695252] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.533 23:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.471 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.471 "name": "raid_bdev1", 00:14:04.471 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:04.471 "strip_size_kb": 64, 00:14:04.471 "state": "online", 00:14:04.471 "raid_level": "raid5f", 00:14:04.471 "superblock": true, 00:14:04.471 "num_base_bdevs": 3, 00:14:04.471 "num_base_bdevs_discovered": 3, 00:14:04.471 "num_base_bdevs_operational": 3, 00:14:04.471 "process": { 00:14:04.471 "type": "rebuild", 00:14:04.471 "target": "spare", 00:14:04.471 "progress": { 00:14:04.471 "blocks": 20480, 00:14:04.471 "percent": 16 00:14:04.471 } 00:14:04.471 }, 00:14:04.471 "base_bdevs_list": [ 00:14:04.472 { 00:14:04.472 "name": "spare", 00:14:04.472 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:04.472 "is_configured": true, 00:14:04.472 "data_offset": 2048, 00:14:04.472 "data_size": 63488 00:14:04.472 }, 00:14:04.472 { 00:14:04.472 "name": "BaseBdev2", 00:14:04.472 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:04.472 "is_configured": true, 00:14:04.472 "data_offset": 2048, 00:14:04.472 "data_size": 63488 00:14:04.472 }, 00:14:04.472 { 00:14:04.472 "name": "BaseBdev3", 00:14:04.472 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:04.472 "is_configured": true, 00:14:04.472 "data_offset": 2048, 00:14:04.472 "data_size": 63488 00:14:04.472 } 00:14:04.472 ] 00:14:04.472 }' 00:14:04.472 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.472 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:04.472 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:04.732 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.732 "name": "raid_bdev1", 00:14:04.732 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:04.732 "strip_size_kb": 64, 00:14:04.732 "state": "online", 00:14:04.732 "raid_level": "raid5f", 00:14:04.732 "superblock": true, 00:14:04.732 "num_base_bdevs": 3, 00:14:04.732 "num_base_bdevs_discovered": 3, 00:14:04.732 "num_base_bdevs_operational": 3, 00:14:04.732 "process": { 00:14:04.732 "type": "rebuild", 00:14:04.732 "target": "spare", 00:14:04.732 "progress": { 00:14:04.732 "blocks": 22528, 00:14:04.732 "percent": 17 00:14:04.732 } 00:14:04.732 }, 00:14:04.732 "base_bdevs_list": [ 00:14:04.732 { 00:14:04.732 "name": "spare", 00:14:04.732 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:04.732 "is_configured": true, 00:14:04.732 "data_offset": 2048, 00:14:04.732 "data_size": 63488 00:14:04.732 }, 00:14:04.732 { 00:14:04.732 "name": "BaseBdev2", 00:14:04.732 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:04.732 "is_configured": true, 00:14:04.732 "data_offset": 2048, 00:14:04.732 "data_size": 63488 00:14:04.732 }, 00:14:04.732 { 00:14:04.732 "name": "BaseBdev3", 00:14:04.732 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:04.732 "is_configured": true, 00:14:04.732 "data_offset": 2048, 00:14:04.732 "data_size": 63488 00:14:04.732 } 00:14:04.732 ] 00:14:04.732 }' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:04.732 23:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.677 23:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.677 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.677 "name": "raid_bdev1", 00:14:05.677 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:05.677 "strip_size_kb": 64, 00:14:05.678 "state": "online", 00:14:05.678 "raid_level": "raid5f", 00:14:05.678 "superblock": true, 00:14:05.678 "num_base_bdevs": 3, 00:14:05.678 "num_base_bdevs_discovered": 3, 00:14:05.678 "num_base_bdevs_operational": 3, 00:14:05.678 "process": { 00:14:05.678 "type": "rebuild", 00:14:05.678 "target": "spare", 00:14:05.678 "progress": { 00:14:05.678 "blocks": 47104, 00:14:05.678 "percent": 37 00:14:05.678 } 00:14:05.678 }, 
00:14:05.678 "base_bdevs_list": [ 00:14:05.678 { 00:14:05.678 "name": "spare", 00:14:05.678 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:05.678 "is_configured": true, 00:14:05.678 "data_offset": 2048, 00:14:05.678 "data_size": 63488 00:14:05.678 }, 00:14:05.678 { 00:14:05.678 "name": "BaseBdev2", 00:14:05.678 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:05.678 "is_configured": true, 00:14:05.678 "data_offset": 2048, 00:14:05.678 "data_size": 63488 00:14:05.678 }, 00:14:05.678 { 00:14:05.678 "name": "BaseBdev3", 00:14:05.678 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:05.678 "is_configured": true, 00:14:05.678 "data_offset": 2048, 00:14:05.678 "data_size": 63488 00:14:05.678 } 00:14:05.678 ] 00:14:05.678 }' 00:14:05.678 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.937 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.937 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.937 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.937 23:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.877 
23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.877 "name": "raid_bdev1", 00:14:06.877 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:06.877 "strip_size_kb": 64, 00:14:06.877 "state": "online", 00:14:06.877 "raid_level": "raid5f", 00:14:06.877 "superblock": true, 00:14:06.877 "num_base_bdevs": 3, 00:14:06.877 "num_base_bdevs_discovered": 3, 00:14:06.877 "num_base_bdevs_operational": 3, 00:14:06.877 "process": { 00:14:06.877 "type": "rebuild", 00:14:06.877 "target": "spare", 00:14:06.877 "progress": { 00:14:06.877 "blocks": 69632, 00:14:06.877 "percent": 54 00:14:06.877 } 00:14:06.877 }, 00:14:06.877 "base_bdevs_list": [ 00:14:06.877 { 00:14:06.877 "name": "spare", 00:14:06.877 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:06.877 "is_configured": true, 00:14:06.877 "data_offset": 2048, 00:14:06.877 "data_size": 63488 00:14:06.877 }, 00:14:06.877 { 00:14:06.877 "name": "BaseBdev2", 00:14:06.877 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:06.877 "is_configured": true, 00:14:06.877 "data_offset": 2048, 00:14:06.877 "data_size": 63488 00:14:06.877 }, 00:14:06.877 { 00:14:06.877 "name": "BaseBdev3", 00:14:06.877 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:06.877 "is_configured": true, 00:14:06.877 "data_offset": 2048, 00:14:06.877 "data_size": 63488 00:14:06.877 } 00:14:06.877 ] 00:14:06.877 }' 00:14:06.877 23:09:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.877 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.138 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.138 23:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.075 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.076 "name": "raid_bdev1", 00:14:08.076 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:08.076 
"strip_size_kb": 64, 00:14:08.076 "state": "online", 00:14:08.076 "raid_level": "raid5f", 00:14:08.076 "superblock": true, 00:14:08.076 "num_base_bdevs": 3, 00:14:08.076 "num_base_bdevs_discovered": 3, 00:14:08.076 "num_base_bdevs_operational": 3, 00:14:08.076 "process": { 00:14:08.076 "type": "rebuild", 00:14:08.076 "target": "spare", 00:14:08.076 "progress": { 00:14:08.076 "blocks": 92160, 00:14:08.076 "percent": 72 00:14:08.076 } 00:14:08.076 }, 00:14:08.076 "base_bdevs_list": [ 00:14:08.076 { 00:14:08.076 "name": "spare", 00:14:08.076 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:08.076 "is_configured": true, 00:14:08.076 "data_offset": 2048, 00:14:08.076 "data_size": 63488 00:14:08.076 }, 00:14:08.076 { 00:14:08.076 "name": "BaseBdev2", 00:14:08.076 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:08.076 "is_configured": true, 00:14:08.076 "data_offset": 2048, 00:14:08.076 "data_size": 63488 00:14:08.076 }, 00:14:08.076 { 00:14:08.076 "name": "BaseBdev3", 00:14:08.076 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:08.076 "is_configured": true, 00:14:08.076 "data_offset": 2048, 00:14:08.076 "data_size": 63488 00:14:08.076 } 00:14:08.076 ] 00:14:08.076 }' 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.076 23:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.455 "name": "raid_bdev1", 00:14:09.455 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:09.455 "strip_size_kb": 64, 00:14:09.455 "state": "online", 00:14:09.455 "raid_level": "raid5f", 00:14:09.455 "superblock": true, 00:14:09.455 "num_base_bdevs": 3, 00:14:09.455 "num_base_bdevs_discovered": 3, 00:14:09.455 "num_base_bdevs_operational": 3, 00:14:09.455 "process": { 00:14:09.455 "type": "rebuild", 00:14:09.455 "target": "spare", 00:14:09.455 "progress": { 00:14:09.455 "blocks": 116736, 00:14:09.455 "percent": 91 00:14:09.455 } 00:14:09.455 }, 00:14:09.455 "base_bdevs_list": [ 00:14:09.455 { 00:14:09.455 "name": "spare", 00:14:09.455 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:09.455 "is_configured": true, 00:14:09.455 "data_offset": 2048, 00:14:09.455 "data_size": 63488 00:14:09.455 }, 00:14:09.455 { 00:14:09.455 "name": "BaseBdev2", 00:14:09.455 "uuid": 
"6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:09.455 "is_configured": true, 00:14:09.455 "data_offset": 2048, 00:14:09.455 "data_size": 63488 00:14:09.455 }, 00:14:09.455 { 00:14:09.455 "name": "BaseBdev3", 00:14:09.455 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:09.455 "is_configured": true, 00:14:09.455 "data_offset": 2048, 00:14:09.455 "data_size": 63488 00:14:09.455 } 00:14:09.455 ] 00:14:09.455 }' 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.455 23:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.714 [2024-11-18 23:09:28.926841] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.714 [2024-11-18 23:09:28.926903] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.714 [2024-11-18 23:09:28.926994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.296 "name": "raid_bdev1", 00:14:10.296 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:10.296 "strip_size_kb": 64, 00:14:10.296 "state": "online", 00:14:10.296 "raid_level": "raid5f", 00:14:10.296 "superblock": true, 00:14:10.296 "num_base_bdevs": 3, 00:14:10.296 "num_base_bdevs_discovered": 3, 00:14:10.296 "num_base_bdevs_operational": 3, 00:14:10.296 "base_bdevs_list": [ 00:14:10.296 { 00:14:10.296 "name": "spare", 00:14:10.296 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:10.296 "is_configured": true, 00:14:10.296 "data_offset": 2048, 00:14:10.296 "data_size": 63488 00:14:10.296 }, 00:14:10.296 { 00:14:10.296 "name": "BaseBdev2", 00:14:10.296 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:10.296 "is_configured": true, 00:14:10.296 "data_offset": 2048, 00:14:10.296 "data_size": 63488 00:14:10.296 }, 00:14:10.296 { 00:14:10.296 "name": "BaseBdev3", 00:14:10.296 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:10.296 "is_configured": true, 00:14:10.296 "data_offset": 2048, 00:14:10.296 "data_size": 63488 00:14:10.296 } 00:14:10.296 ] 00:14:10.296 }' 00:14:10.296 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.555 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.556 "name": "raid_bdev1", 00:14:10.556 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:10.556 "strip_size_kb": 64, 00:14:10.556 "state": "online", 00:14:10.556 "raid_level": "raid5f", 00:14:10.556 "superblock": true, 00:14:10.556 "num_base_bdevs": 3, 00:14:10.556 "num_base_bdevs_discovered": 3, 00:14:10.556 "num_base_bdevs_operational": 3, 00:14:10.556 "base_bdevs_list": [ 
00:14:10.556 { 00:14:10.556 "name": "spare", 00:14:10.556 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:10.556 "is_configured": true, 00:14:10.556 "data_offset": 2048, 00:14:10.556 "data_size": 63488 00:14:10.556 }, 00:14:10.556 { 00:14:10.556 "name": "BaseBdev2", 00:14:10.556 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:10.556 "is_configured": true, 00:14:10.556 "data_offset": 2048, 00:14:10.556 "data_size": 63488 00:14:10.556 }, 00:14:10.556 { 00:14:10.556 "name": "BaseBdev3", 00:14:10.556 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:10.556 "is_configured": true, 00:14:10.556 "data_offset": 2048, 00:14:10.556 "data_size": 63488 00:14:10.556 } 00:14:10.556 ] 00:14:10.556 }' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.556 23:09:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.556 "name": "raid_bdev1", 00:14:10.556 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:10.556 "strip_size_kb": 64, 00:14:10.556 "state": "online", 00:14:10.556 "raid_level": "raid5f", 00:14:10.556 "superblock": true, 00:14:10.556 "num_base_bdevs": 3, 00:14:10.556 "num_base_bdevs_discovered": 3, 00:14:10.556 "num_base_bdevs_operational": 3, 00:14:10.556 "base_bdevs_list": [ 00:14:10.556 { 00:14:10.556 "name": "spare", 00:14:10.556 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:10.556 "is_configured": true, 00:14:10.556 "data_offset": 2048, 00:14:10.556 "data_size": 63488 00:14:10.556 }, 00:14:10.556 { 00:14:10.556 "name": "BaseBdev2", 00:14:10.556 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:10.556 "is_configured": true, 00:14:10.556 "data_offset": 2048, 00:14:10.556 "data_size": 63488 00:14:10.556 }, 00:14:10.556 { 00:14:10.556 "name": "BaseBdev3", 00:14:10.556 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:10.556 "is_configured": true, 00:14:10.556 "data_offset": 2048, 00:14:10.556 
"data_size": 63488 00:14:10.556 } 00:14:10.556 ] 00:14:10.556 }' 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.556 23:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.126 [2024-11-18 23:09:30.317592] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.126 [2024-11-18 23:09:30.317627] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.126 [2024-11-18 23:09:30.317723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.126 [2024-11-18 23:09:30.317810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.126 [2024-11-18 23:09:30.317833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.126 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:11.385 /dev/nbd0 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:11.385 23:09:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.385 1+0 records in 00:14:11.385 1+0 records out 00:14:11.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292749 s, 14.0 MB/s 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.385 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:11.645 /dev/nbd1 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:11.645 23:09:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.645 1+0 records in 00:14:11.645 1+0 records out 00:14:11.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576782 s, 7.1 MB/s 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.645 23:09:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.645 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:11.646 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.646 23:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.905 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.905 
23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.164 [2024-11-18 23:09:31.419468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.164 
[2024-11-18 23:09:31.419521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.164 [2024-11-18 23:09:31.419543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:12.164 [2024-11-18 23:09:31.419551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.164 [2024-11-18 23:09:31.421686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.164 [2024-11-18 23:09:31.421767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.164 [2024-11-18 23:09:31.421850] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:12.164 [2024-11-18 23:09:31.421888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.164 [2024-11-18 23:09:31.421992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.164 [2024-11-18 23:09:31.422092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.164 spare 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.164 [2024-11-18 23:09:31.521981] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:12.164 [2024-11-18 23:09:31.522004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:12.164 [2024-11-18 23:09:31.522240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:12.164 [2024-11-18 23:09:31.522651] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:12.164 [2024-11-18 23:09:31.522666] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:12.164 [2024-11-18 23:09:31.522809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.164 23:09:31 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:12.423 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.423 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.423 "name": "raid_bdev1", 00:14:12.423 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:12.423 "strip_size_kb": 64, 00:14:12.423 "state": "online", 00:14:12.423 "raid_level": "raid5f", 00:14:12.423 "superblock": true, 00:14:12.423 "num_base_bdevs": 3, 00:14:12.423 "num_base_bdevs_discovered": 3, 00:14:12.423 "num_base_bdevs_operational": 3, 00:14:12.423 "base_bdevs_list": [ 00:14:12.423 { 00:14:12.423 "name": "spare", 00:14:12.423 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:12.423 "is_configured": true, 00:14:12.423 "data_offset": 2048, 00:14:12.423 "data_size": 63488 00:14:12.423 }, 00:14:12.423 { 00:14:12.423 "name": "BaseBdev2", 00:14:12.423 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:12.423 "is_configured": true, 00:14:12.423 "data_offset": 2048, 00:14:12.423 "data_size": 63488 00:14:12.423 }, 00:14:12.423 { 00:14:12.423 "name": "BaseBdev3", 00:14:12.423 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:12.423 "is_configured": true, 00:14:12.423 "data_offset": 2048, 00:14:12.423 "data_size": 63488 00:14:12.423 } 00:14:12.423 ] 00:14:12.423 }' 00:14:12.423 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.423 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.683 23:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.683 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.683 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.683 "name": "raid_bdev1", 00:14:12.683 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:12.683 "strip_size_kb": 64, 00:14:12.683 "state": "online", 00:14:12.683 "raid_level": "raid5f", 00:14:12.683 "superblock": true, 00:14:12.683 "num_base_bdevs": 3, 00:14:12.683 "num_base_bdevs_discovered": 3, 00:14:12.683 "num_base_bdevs_operational": 3, 00:14:12.683 "base_bdevs_list": [ 00:14:12.683 { 00:14:12.683 "name": "spare", 00:14:12.683 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:12.683 "is_configured": true, 00:14:12.683 "data_offset": 2048, 00:14:12.683 "data_size": 63488 00:14:12.683 }, 00:14:12.683 { 00:14:12.683 "name": "BaseBdev2", 00:14:12.683 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:12.683 "is_configured": true, 00:14:12.683 "data_offset": 2048, 00:14:12.683 "data_size": 63488 00:14:12.683 }, 00:14:12.683 { 00:14:12.683 "name": "BaseBdev3", 00:14:12.683 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:12.683 "is_configured": true, 00:14:12.683 "data_offset": 2048, 00:14:12.683 "data_size": 63488 00:14:12.683 } 00:14:12.683 ] 00:14:12.683 }' 00:14:12.683 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:12.683 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.683 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.941 [2024-11-18 23:09:32.143025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.941 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.942 "name": "raid_bdev1", 00:14:12.942 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:12.942 "strip_size_kb": 64, 00:14:12.942 "state": "online", 00:14:12.942 "raid_level": "raid5f", 00:14:12.942 "superblock": true, 00:14:12.942 "num_base_bdevs": 3, 00:14:12.942 "num_base_bdevs_discovered": 2, 00:14:12.942 "num_base_bdevs_operational": 2, 00:14:12.942 "base_bdevs_list": [ 00:14:12.942 { 00:14:12.942 "name": null, 00:14:12.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.942 "is_configured": false, 00:14:12.942 "data_offset": 0, 00:14:12.942 "data_size": 63488 00:14:12.942 }, 00:14:12.942 { 00:14:12.942 "name": "BaseBdev2", 
00:14:12.942 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:12.942 "is_configured": true, 00:14:12.942 "data_offset": 2048, 00:14:12.942 "data_size": 63488 00:14:12.942 }, 00:14:12.942 { 00:14:12.942 "name": "BaseBdev3", 00:14:12.942 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:12.942 "is_configured": true, 00:14:12.942 "data_offset": 2048, 00:14:12.942 "data_size": 63488 00:14:12.942 } 00:14:12.942 ] 00:14:12.942 }' 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.942 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.509 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.509 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.509 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.509 [2024-11-18 23:09:32.618223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.509 [2024-11-18 23:09:32.618448] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:13.509 [2024-11-18 23:09:32.618516] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:13.509 [2024-11-18 23:09:32.618575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.509 [2024-11-18 23:09:32.622226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:13.509 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.509 23:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:13.509 [2024-11-18 23:09:32.624309] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.448 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.448 "name": "raid_bdev1", 00:14:14.448 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:14.448 "strip_size_kb": 64, 00:14:14.448 "state": "online", 00:14:14.448 
"raid_level": "raid5f", 00:14:14.448 "superblock": true, 00:14:14.448 "num_base_bdevs": 3, 00:14:14.448 "num_base_bdevs_discovered": 3, 00:14:14.448 "num_base_bdevs_operational": 3, 00:14:14.448 "process": { 00:14:14.448 "type": "rebuild", 00:14:14.448 "target": "spare", 00:14:14.448 "progress": { 00:14:14.448 "blocks": 20480, 00:14:14.448 "percent": 16 00:14:14.448 } 00:14:14.448 }, 00:14:14.448 "base_bdevs_list": [ 00:14:14.448 { 00:14:14.448 "name": "spare", 00:14:14.449 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:14.449 "is_configured": true, 00:14:14.449 "data_offset": 2048, 00:14:14.449 "data_size": 63488 00:14:14.449 }, 00:14:14.449 { 00:14:14.449 "name": "BaseBdev2", 00:14:14.449 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:14.449 "is_configured": true, 00:14:14.449 "data_offset": 2048, 00:14:14.449 "data_size": 63488 00:14:14.449 }, 00:14:14.449 { 00:14:14.449 "name": "BaseBdev3", 00:14:14.449 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:14.449 "is_configured": true, 00:14:14.449 "data_offset": 2048, 00:14:14.449 "data_size": 63488 00:14:14.449 } 00:14:14.449 ] 00:14:14.449 }' 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.449 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.449 [2024-11-18 23:09:33.780860] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.709 [2024-11-18 23:09:33.830847] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.709 [2024-11-18 23:09:33.830941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.709 [2024-11-18 23:09:33.830995] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.709 [2024-11-18 23:09:33.831016] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.709 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.709 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.709 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.709 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.709 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.710 23:09:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.710 "name": "raid_bdev1", 00:14:14.710 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:14.710 "strip_size_kb": 64, 00:14:14.710 "state": "online", 00:14:14.710 "raid_level": "raid5f", 00:14:14.710 "superblock": true, 00:14:14.710 "num_base_bdevs": 3, 00:14:14.710 "num_base_bdevs_discovered": 2, 00:14:14.710 "num_base_bdevs_operational": 2, 00:14:14.710 "base_bdevs_list": [ 00:14:14.710 { 00:14:14.710 "name": null, 00:14:14.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.710 "is_configured": false, 00:14:14.710 "data_offset": 0, 00:14:14.710 "data_size": 63488 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "name": "BaseBdev2", 00:14:14.710 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:14.710 "is_configured": true, 00:14:14.710 "data_offset": 2048, 00:14:14.710 "data_size": 63488 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "name": "BaseBdev3", 00:14:14.710 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:14.710 "is_configured": true, 00:14:14.710 "data_offset": 2048, 00:14:14.710 "data_size": 63488 00:14:14.710 } 00:14:14.710 ] 00:14:14.710 }' 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.710 23:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 23:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.970 23:09:34 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.970 23:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.970 [2024-11-18 23:09:34.295325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.970 [2024-11-18 23:09:34.295376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.970 [2024-11-18 23:09:34.295396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:14.970 [2024-11-18 23:09:34.295405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.970 [2024-11-18 23:09:34.295840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.970 [2024-11-18 23:09:34.295864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.970 [2024-11-18 23:09:34.295941] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:14.970 [2024-11-18 23:09:34.295953] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:14.970 [2024-11-18 23:09:34.295963] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:14.970 [2024-11-18 23:09:34.295981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.970 [2024-11-18 23:09:34.299082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:14.970 [2024-11-18 23:09:34.301135] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.970 spare 00:14:14.970 23:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.970 23:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.351 "name": "raid_bdev1", 00:14:16.351 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:16.351 "strip_size_kb": 64, 00:14:16.351 "state": 
"online", 00:14:16.351 "raid_level": "raid5f", 00:14:16.351 "superblock": true, 00:14:16.351 "num_base_bdevs": 3, 00:14:16.351 "num_base_bdevs_discovered": 3, 00:14:16.351 "num_base_bdevs_operational": 3, 00:14:16.351 "process": { 00:14:16.351 "type": "rebuild", 00:14:16.351 "target": "spare", 00:14:16.351 "progress": { 00:14:16.351 "blocks": 20480, 00:14:16.351 "percent": 16 00:14:16.351 } 00:14:16.351 }, 00:14:16.351 "base_bdevs_list": [ 00:14:16.351 { 00:14:16.351 "name": "spare", 00:14:16.351 "uuid": "cc54c179-ddfb-54f8-8a3c-d606d035feb8", 00:14:16.351 "is_configured": true, 00:14:16.351 "data_offset": 2048, 00:14:16.351 "data_size": 63488 00:14:16.351 }, 00:14:16.351 { 00:14:16.351 "name": "BaseBdev2", 00:14:16.351 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:16.351 "is_configured": true, 00:14:16.351 "data_offset": 2048, 00:14:16.351 "data_size": 63488 00:14:16.351 }, 00:14:16.351 { 00:14:16.351 "name": "BaseBdev3", 00:14:16.351 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:16.351 "is_configured": true, 00:14:16.351 "data_offset": 2048, 00:14:16.351 "data_size": 63488 00:14:16.351 } 00:14:16.351 ] 00:14:16.351 }' 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.351 [2024-11-18 23:09:35.463781] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.351 [2024-11-18 23:09:35.507653] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.351 [2024-11-18 23:09:35.507707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.351 [2024-11-18 23:09:35.507722] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.351 [2024-11-18 23:09:35.507733] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.351 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.352 "name": "raid_bdev1", 00:14:16.352 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:16.352 "strip_size_kb": 64, 00:14:16.352 "state": "online", 00:14:16.352 "raid_level": "raid5f", 00:14:16.352 "superblock": true, 00:14:16.352 "num_base_bdevs": 3, 00:14:16.352 "num_base_bdevs_discovered": 2, 00:14:16.352 "num_base_bdevs_operational": 2, 00:14:16.352 "base_bdevs_list": [ 00:14:16.352 { 00:14:16.352 "name": null, 00:14:16.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.352 "is_configured": false, 00:14:16.352 "data_offset": 0, 00:14:16.352 "data_size": 63488 00:14:16.352 }, 00:14:16.352 { 00:14:16.352 "name": "BaseBdev2", 00:14:16.352 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:16.352 "is_configured": true, 00:14:16.352 "data_offset": 2048, 00:14:16.352 "data_size": 63488 00:14:16.352 }, 00:14:16.352 { 00:14:16.352 "name": "BaseBdev3", 00:14:16.352 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:16.352 "is_configured": true, 00:14:16.352 "data_offset": 2048, 00:14:16.352 "data_size": 63488 00:14:16.352 } 00:14:16.352 ] 00:14:16.352 }' 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.352 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.612 "name": "raid_bdev1", 00:14:16.612 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:16.612 "strip_size_kb": 64, 00:14:16.612 "state": "online", 00:14:16.612 "raid_level": "raid5f", 00:14:16.612 "superblock": true, 00:14:16.612 "num_base_bdevs": 3, 00:14:16.612 "num_base_bdevs_discovered": 2, 00:14:16.612 "num_base_bdevs_operational": 2, 00:14:16.612 "base_bdevs_list": [ 00:14:16.612 { 00:14:16.612 "name": null, 00:14:16.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.612 "is_configured": false, 00:14:16.612 "data_offset": 0, 00:14:16.612 "data_size": 63488 00:14:16.612 }, 00:14:16.612 { 00:14:16.612 "name": "BaseBdev2", 00:14:16.612 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:16.612 "is_configured": true, 00:14:16.612 "data_offset": 2048, 00:14:16.612 "data_size": 63488 00:14:16.612 }, 00:14:16.612 { 00:14:16.612 "name": "BaseBdev3", 00:14:16.612 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:16.612 "is_configured": true, 
00:14:16.612 "data_offset": 2048, 00:14:16.612 "data_size": 63488 00:14:16.612 } 00:14:16.612 ] 00:14:16.612 }' 00:14:16.612 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.872 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.872 23:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.872 [2024-11-18 23:09:36.055775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.872 [2024-11-18 23:09:36.055869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.872 [2024-11-18 23:09:36.055894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:16.872 [2024-11-18 23:09:36.055906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.872 [2024-11-18 23:09:36.056292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.872 [2024-11-18 
23:09:36.056330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.872 [2024-11-18 23:09:36.056395] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:16.872 [2024-11-18 23:09:36.056409] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:16.872 [2024-11-18 23:09:36.056416] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:16.872 [2024-11-18 23:09:36.056427] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:16.872 BaseBdev1 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.872 23:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.811 23:09:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.811 "name": "raid_bdev1", 00:14:17.811 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:17.811 "strip_size_kb": 64, 00:14:17.811 "state": "online", 00:14:17.811 "raid_level": "raid5f", 00:14:17.811 "superblock": true, 00:14:17.811 "num_base_bdevs": 3, 00:14:17.811 "num_base_bdevs_discovered": 2, 00:14:17.811 "num_base_bdevs_operational": 2, 00:14:17.811 "base_bdevs_list": [ 00:14:17.811 { 00:14:17.811 "name": null, 00:14:17.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.811 "is_configured": false, 00:14:17.811 "data_offset": 0, 00:14:17.811 "data_size": 63488 00:14:17.811 }, 00:14:17.811 { 00:14:17.811 "name": "BaseBdev2", 00:14:17.811 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:17.811 "is_configured": true, 00:14:17.811 "data_offset": 2048, 00:14:17.811 "data_size": 63488 00:14:17.811 }, 00:14:17.811 { 00:14:17.811 "name": "BaseBdev3", 00:14:17.811 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:17.811 "is_configured": true, 00:14:17.811 "data_offset": 2048, 00:14:17.811 "data_size": 63488 00:14:17.811 } 00:14:17.811 ] 00:14:17.811 }' 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.811 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.381 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.381 "name": "raid_bdev1", 00:14:18.381 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:18.381 "strip_size_kb": 64, 00:14:18.381 "state": "online", 00:14:18.381 "raid_level": "raid5f", 00:14:18.381 "superblock": true, 00:14:18.381 "num_base_bdevs": 3, 00:14:18.381 "num_base_bdevs_discovered": 2, 00:14:18.381 "num_base_bdevs_operational": 2, 00:14:18.381 "base_bdevs_list": [ 00:14:18.381 { 00:14:18.381 "name": null, 00:14:18.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.381 "is_configured": false, 00:14:18.381 "data_offset": 0, 00:14:18.381 "data_size": 63488 00:14:18.381 }, 00:14:18.381 { 00:14:18.381 "name": "BaseBdev2", 00:14:18.381 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 
00:14:18.381 "is_configured": true, 00:14:18.381 "data_offset": 2048, 00:14:18.381 "data_size": 63488 00:14:18.381 }, 00:14:18.381 { 00:14:18.381 "name": "BaseBdev3", 00:14:18.381 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:18.381 "is_configured": true, 00:14:18.381 "data_offset": 2048, 00:14:18.381 "data_size": 63488 00:14:18.381 } 00:14:18.381 ] 00:14:18.381 }' 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.382 23:09:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.382 [2024-11-18 23:09:37.633148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.382 [2024-11-18 23:09:37.633366] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:18.382 [2024-11-18 23:09:37.633428] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:18.382 request: 00:14:18.382 { 00:14:18.382 "base_bdev": "BaseBdev1", 00:14:18.382 "raid_bdev": "raid_bdev1", 00:14:18.382 "method": "bdev_raid_add_base_bdev", 00:14:18.382 "req_id": 1 00:14:18.382 } 00:14:18.382 Got JSON-RPC error response 00:14:18.382 response: 00:14:18.382 { 00:14:18.382 "code": -22, 00:14:18.382 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:18.382 } 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.382 23:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.320 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.590 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.590 "name": "raid_bdev1", 00:14:19.590 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:19.590 "strip_size_kb": 64, 00:14:19.590 "state": "online", 00:14:19.590 "raid_level": "raid5f", 00:14:19.590 "superblock": true, 00:14:19.590 "num_base_bdevs": 3, 00:14:19.590 "num_base_bdevs_discovered": 2, 00:14:19.590 "num_base_bdevs_operational": 2, 00:14:19.590 "base_bdevs_list": [ 00:14:19.590 { 00:14:19.590 "name": null, 00:14:19.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.590 "is_configured": false, 00:14:19.590 "data_offset": 0, 00:14:19.590 "data_size": 63488 00:14:19.590 }, 00:14:19.590 { 00:14:19.590 
"name": "BaseBdev2", 00:14:19.590 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:19.590 "is_configured": true, 00:14:19.590 "data_offset": 2048, 00:14:19.590 "data_size": 63488 00:14:19.590 }, 00:14:19.590 { 00:14:19.590 "name": "BaseBdev3", 00:14:19.590 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:19.590 "is_configured": true, 00:14:19.590 "data_offset": 2048, 00:14:19.590 "data_size": 63488 00:14:19.590 } 00:14:19.590 ] 00:14:19.590 }' 00:14:19.590 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.590 23:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.856 "name": "raid_bdev1", 00:14:19.856 "uuid": "2d5fd2f7-d1eb-40c8-a179-ea33df8fbd5c", 00:14:19.856 
"strip_size_kb": 64, 00:14:19.856 "state": "online", 00:14:19.856 "raid_level": "raid5f", 00:14:19.856 "superblock": true, 00:14:19.856 "num_base_bdevs": 3, 00:14:19.856 "num_base_bdevs_discovered": 2, 00:14:19.856 "num_base_bdevs_operational": 2, 00:14:19.856 "base_bdevs_list": [ 00:14:19.856 { 00:14:19.856 "name": null, 00:14:19.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.856 "is_configured": false, 00:14:19.856 "data_offset": 0, 00:14:19.856 "data_size": 63488 00:14:19.856 }, 00:14:19.856 { 00:14:19.856 "name": "BaseBdev2", 00:14:19.856 "uuid": "6c0621ee-ab58-5e3e-9f0c-34de3030daee", 00:14:19.856 "is_configured": true, 00:14:19.856 "data_offset": 2048, 00:14:19.856 "data_size": 63488 00:14:19.856 }, 00:14:19.856 { 00:14:19.856 "name": "BaseBdev3", 00:14:19.856 "uuid": "e6dfcfaa-0b1a-57bb-8bf1-84a06f5ffd45", 00:14:19.856 "is_configured": true, 00:14:19.856 "data_offset": 2048, 00:14:19.856 "data_size": 63488 00:14:19.856 } 00:14:19.856 ] 00:14:19.856 }' 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.856 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92470 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92470 ']' 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92470 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.115 23:09:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92470 00:14:20.115 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.115 killing process with pid 92470 00:14:20.115 Received shutdown signal, test time was about 60.000000 seconds 00:14:20.115 00:14:20.115 Latency(us) 00:14:20.115 [2024-11-18T23:09:39.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.115 [2024-11-18T23:09:39.493Z] =================================================================================================================== 00:14:20.115 [2024-11-18T23:09:39.494Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.116 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.116 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92470' 00:14:20.116 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92470 00:14:20.116 [2024-11-18 23:09:39.284880] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.116 [2024-11-18 23:09:39.284988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.116 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92470 00:14:20.116 [2024-11-18 23:09:39.285049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.116 [2024-11-18 23:09:39.285058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:20.116 [2024-11-18 23:09:39.324959] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.376 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:20.376 00:14:20.376 real 0m21.736s 00:14:20.376 user 0m28.352s 
00:14:20.376 sys 0m2.816s 00:14:20.376 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.376 23:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.376 ************************************ 00:14:20.376 END TEST raid5f_rebuild_test_sb 00:14:20.376 ************************************ 00:14:20.376 23:09:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:20.376 23:09:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:20.376 23:09:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:20.376 23:09:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.376 23:09:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.376 ************************************ 00:14:20.376 START TEST raid5f_state_function_test 00:14:20.376 ************************************ 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93205 00:14:20.376 Process raid pid: 93205 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93205' 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93205 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93205 ']' 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.376 23:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.376 [2024-11-18 23:09:39.738225] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:20.376 [2024-11-18 23:09:39.738386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.636 [2024-11-18 23:09:39.901832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.636 [2024-11-18 23:09:39.948955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.636 [2024-11-18 23:09:39.991656] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.636 [2024-11-18 23:09:39.991769] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.238 [2024-11-18 23:09:40.553650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.238 [2024-11-18 23:09:40.553697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.238 [2024-11-18 23:09:40.553715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.238 [2024-11-18 23:09:40.553724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.238 [2024-11-18 23:09:40.553730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:21.238 [2024-11-18 23:09:40.553742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.238 [2024-11-18 23:09:40.553748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:21.238 [2024-11-18 23:09:40.553756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.238 23:09:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.238 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.539 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.539 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.539 "name": "Existed_Raid", 00:14:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.540 "strip_size_kb": 64, 00:14:21.540 "state": "configuring", 00:14:21.540 "raid_level": "raid5f", 00:14:21.540 "superblock": false, 00:14:21.540 "num_base_bdevs": 4, 00:14:21.540 "num_base_bdevs_discovered": 0, 00:14:21.540 "num_base_bdevs_operational": 4, 00:14:21.540 "base_bdevs_list": [ 00:14:21.540 { 00:14:21.540 "name": "BaseBdev1", 00:14:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.540 "is_configured": false, 00:14:21.540 "data_offset": 0, 00:14:21.540 "data_size": 0 00:14:21.540 }, 00:14:21.540 { 00:14:21.540 "name": "BaseBdev2", 00:14:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.540 "is_configured": false, 00:14:21.540 "data_offset": 0, 00:14:21.540 "data_size": 0 00:14:21.540 }, 00:14:21.540 { 00:14:21.540 "name": "BaseBdev3", 00:14:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.540 "is_configured": false, 00:14:21.540 "data_offset": 0, 00:14:21.540 "data_size": 0 00:14:21.540 }, 00:14:21.540 { 00:14:21.540 "name": "BaseBdev4", 00:14:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.540 "is_configured": false, 00:14:21.540 "data_offset": 0, 00:14:21.540 "data_size": 0 00:14:21.540 } 00:14:21.540 ] 00:14:21.540 }' 00:14:21.540 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.540 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.800 [2024-11-18 23:09:40.992830] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.800 [2024-11-18 23:09:40.992908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.800 23:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.800 [2024-11-18 23:09:41.004845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.800 [2024-11-18 23:09:41.004920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.800 [2024-11-18 23:09:41.004945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.800 [2024-11-18 23:09:41.004966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.800 [2024-11-18 23:09:41.004983] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.800 [2024-11-18 23:09:41.005002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.800 [2024-11-18 23:09:41.005019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:21.801 [2024-11-18 23:09:41.005038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.801 [2024-11-18 23:09:41.025603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.801 BaseBdev1 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.801 
23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.801 [ 00:14:21.801 { 00:14:21.801 "name": "BaseBdev1", 00:14:21.801 "aliases": [ 00:14:21.801 "e94b5a83-02e9-4a2a-b275-8fe5599dbd35" 00:14:21.801 ], 00:14:21.801 "product_name": "Malloc disk", 00:14:21.801 "block_size": 512, 00:14:21.801 "num_blocks": 65536, 00:14:21.801 "uuid": "e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:21.801 "assigned_rate_limits": { 00:14:21.801 "rw_ios_per_sec": 0, 00:14:21.801 "rw_mbytes_per_sec": 0, 00:14:21.801 "r_mbytes_per_sec": 0, 00:14:21.801 "w_mbytes_per_sec": 0 00:14:21.801 }, 00:14:21.801 "claimed": true, 00:14:21.801 "claim_type": "exclusive_write", 00:14:21.801 "zoned": false, 00:14:21.801 "supported_io_types": { 00:14:21.801 "read": true, 00:14:21.801 "write": true, 00:14:21.801 "unmap": true, 00:14:21.801 "flush": true, 00:14:21.801 "reset": true, 00:14:21.801 "nvme_admin": false, 00:14:21.801 "nvme_io": false, 00:14:21.801 "nvme_io_md": false, 00:14:21.801 "write_zeroes": true, 00:14:21.801 "zcopy": true, 00:14:21.801 "get_zone_info": false, 00:14:21.801 "zone_management": false, 00:14:21.801 "zone_append": false, 00:14:21.801 "compare": false, 00:14:21.801 "compare_and_write": false, 00:14:21.801 "abort": true, 00:14:21.801 "seek_hole": false, 00:14:21.801 "seek_data": false, 00:14:21.801 "copy": true, 00:14:21.801 "nvme_iov_md": false 00:14:21.801 }, 00:14:21.801 "memory_domains": [ 00:14:21.801 { 00:14:21.801 "dma_device_id": "system", 00:14:21.801 "dma_device_type": 1 00:14:21.801 }, 00:14:21.801 { 00:14:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.801 "dma_device_type": 2 00:14:21.801 } 00:14:21.801 ], 00:14:21.801 "driver_specific": {} 00:14:21.801 } 
00:14:21.801 ] 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.801 "name": "Existed_Raid", 00:14:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.801 "strip_size_kb": 64, 00:14:21.801 "state": "configuring", 00:14:21.801 "raid_level": "raid5f", 00:14:21.801 "superblock": false, 00:14:21.801 "num_base_bdevs": 4, 00:14:21.801 "num_base_bdevs_discovered": 1, 00:14:21.801 "num_base_bdevs_operational": 4, 00:14:21.801 "base_bdevs_list": [ 00:14:21.801 { 00:14:21.801 "name": "BaseBdev1", 00:14:21.801 "uuid": "e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:21.801 "is_configured": true, 00:14:21.801 "data_offset": 0, 00:14:21.801 "data_size": 65536 00:14:21.801 }, 00:14:21.801 { 00:14:21.801 "name": "BaseBdev2", 00:14:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.801 "is_configured": false, 00:14:21.801 "data_offset": 0, 00:14:21.801 "data_size": 0 00:14:21.801 }, 00:14:21.801 { 00:14:21.801 "name": "BaseBdev3", 00:14:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.801 "is_configured": false, 00:14:21.801 "data_offset": 0, 00:14:21.801 "data_size": 0 00:14:21.801 }, 00:14:21.801 { 00:14:21.801 "name": "BaseBdev4", 00:14:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.801 "is_configured": false, 00:14:21.801 "data_offset": 0, 00:14:21.801 "data_size": 0 00:14:21.801 } 00:14:21.801 ] 00:14:21.801 }' 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.801 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.370 
[2024-11-18 23:09:41.500813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.370 [2024-11-18 23:09:41.500912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.370 [2024-11-18 23:09:41.512828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.370 [2024-11-18 23:09:41.514646] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.370 [2024-11-18 23:09:41.514712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.370 [2024-11-18 23:09:41.514753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.370 [2024-11-18 23:09:41.514774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.370 [2024-11-18 23:09:41.514792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:22.370 [2024-11-18 23:09:41.514811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.370 "name": "Existed_Raid", 00:14:22.370 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:22.370 "strip_size_kb": 64, 00:14:22.370 "state": "configuring", 00:14:22.370 "raid_level": "raid5f", 00:14:22.370 "superblock": false, 00:14:22.370 "num_base_bdevs": 4, 00:14:22.370 "num_base_bdevs_discovered": 1, 00:14:22.370 "num_base_bdevs_operational": 4, 00:14:22.370 "base_bdevs_list": [ 00:14:22.370 { 00:14:22.370 "name": "BaseBdev1", 00:14:22.370 "uuid": "e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:22.370 "is_configured": true, 00:14:22.370 "data_offset": 0, 00:14:22.370 "data_size": 65536 00:14:22.370 }, 00:14:22.370 { 00:14:22.370 "name": "BaseBdev2", 00:14:22.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.370 "is_configured": false, 00:14:22.370 "data_offset": 0, 00:14:22.370 "data_size": 0 00:14:22.370 }, 00:14:22.370 { 00:14:22.370 "name": "BaseBdev3", 00:14:22.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.370 "is_configured": false, 00:14:22.370 "data_offset": 0, 00:14:22.370 "data_size": 0 00:14:22.370 }, 00:14:22.370 { 00:14:22.370 "name": "BaseBdev4", 00:14:22.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.370 "is_configured": false, 00:14:22.370 "data_offset": 0, 00:14:22.370 "data_size": 0 00:14:22.370 } 00:14:22.370 ] 00:14:22.370 }' 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.370 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 [2024-11-18 23:09:41.995124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.630 BaseBdev2 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:22.630 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.631 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.631 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.631 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.631 23:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.891 [ 00:14:22.891 { 00:14:22.891 "name": "BaseBdev2", 00:14:22.891 "aliases": [ 00:14:22.891 "10cd021a-cd17-43dd-97c5-51221cd1f1e0" 00:14:22.891 ], 00:14:22.891 "product_name": "Malloc disk", 00:14:22.891 "block_size": 512, 00:14:22.891 "num_blocks": 65536, 00:14:22.891 "uuid": "10cd021a-cd17-43dd-97c5-51221cd1f1e0", 00:14:22.891 "assigned_rate_limits": { 00:14:22.891 "rw_ios_per_sec": 0, 00:14:22.891 "rw_mbytes_per_sec": 0, 00:14:22.891 
"r_mbytes_per_sec": 0, 00:14:22.891 "w_mbytes_per_sec": 0 00:14:22.891 }, 00:14:22.891 "claimed": true, 00:14:22.891 "claim_type": "exclusive_write", 00:14:22.891 "zoned": false, 00:14:22.891 "supported_io_types": { 00:14:22.891 "read": true, 00:14:22.891 "write": true, 00:14:22.891 "unmap": true, 00:14:22.891 "flush": true, 00:14:22.891 "reset": true, 00:14:22.891 "nvme_admin": false, 00:14:22.891 "nvme_io": false, 00:14:22.891 "nvme_io_md": false, 00:14:22.891 "write_zeroes": true, 00:14:22.891 "zcopy": true, 00:14:22.891 "get_zone_info": false, 00:14:22.891 "zone_management": false, 00:14:22.891 "zone_append": false, 00:14:22.891 "compare": false, 00:14:22.891 "compare_and_write": false, 00:14:22.891 "abort": true, 00:14:22.891 "seek_hole": false, 00:14:22.891 "seek_data": false, 00:14:22.891 "copy": true, 00:14:22.891 "nvme_iov_md": false 00:14:22.891 }, 00:14:22.891 "memory_domains": [ 00:14:22.891 { 00:14:22.891 "dma_device_id": "system", 00:14:22.891 "dma_device_type": 1 00:14:22.891 }, 00:14:22.891 { 00:14:22.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.891 "dma_device_type": 2 00:14:22.891 } 00:14:22.891 ], 00:14:22.891 "driver_specific": {} 00:14:22.891 } 00:14:22.891 ] 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.891 "name": "Existed_Raid", 00:14:22.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.891 "strip_size_kb": 64, 00:14:22.891 "state": "configuring", 00:14:22.891 "raid_level": "raid5f", 00:14:22.891 "superblock": false, 00:14:22.891 "num_base_bdevs": 4, 00:14:22.891 "num_base_bdevs_discovered": 2, 00:14:22.891 "num_base_bdevs_operational": 4, 00:14:22.891 "base_bdevs_list": [ 00:14:22.891 { 00:14:22.891 "name": "BaseBdev1", 00:14:22.891 "uuid": 
"e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:22.891 "is_configured": true, 00:14:22.891 "data_offset": 0, 00:14:22.891 "data_size": 65536 00:14:22.891 }, 00:14:22.891 { 00:14:22.891 "name": "BaseBdev2", 00:14:22.891 "uuid": "10cd021a-cd17-43dd-97c5-51221cd1f1e0", 00:14:22.891 "is_configured": true, 00:14:22.891 "data_offset": 0, 00:14:22.891 "data_size": 65536 00:14:22.891 }, 00:14:22.891 { 00:14:22.891 "name": "BaseBdev3", 00:14:22.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.891 "is_configured": false, 00:14:22.891 "data_offset": 0, 00:14:22.891 "data_size": 0 00:14:22.891 }, 00:14:22.891 { 00:14:22.891 "name": "BaseBdev4", 00:14:22.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.891 "is_configured": false, 00:14:22.891 "data_offset": 0, 00:14:22.891 "data_size": 0 00:14:22.891 } 00:14:22.891 ] 00:14:22.891 }' 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.891 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.151 BaseBdev3 00:14:23.151 [2024-11-18 23:09:42.493122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.151 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.151 [ 00:14:23.151 { 00:14:23.151 "name": "BaseBdev3", 00:14:23.151 "aliases": [ 00:14:23.151 "94d141cf-53bc-475a-a351-4b7de3acfa92" 00:14:23.151 ], 00:14:23.151 "product_name": "Malloc disk", 00:14:23.151 "block_size": 512, 00:14:23.151 "num_blocks": 65536, 00:14:23.151 "uuid": "94d141cf-53bc-475a-a351-4b7de3acfa92", 00:14:23.151 "assigned_rate_limits": { 00:14:23.151 "rw_ios_per_sec": 0, 00:14:23.151 "rw_mbytes_per_sec": 0, 00:14:23.151 "r_mbytes_per_sec": 0, 00:14:23.151 "w_mbytes_per_sec": 0 00:14:23.151 }, 00:14:23.151 "claimed": true, 00:14:23.151 "claim_type": "exclusive_write", 00:14:23.151 "zoned": false, 00:14:23.152 "supported_io_types": { 00:14:23.152 "read": true, 00:14:23.152 "write": true, 00:14:23.152 "unmap": true, 00:14:23.152 "flush": true, 00:14:23.152 "reset": true, 00:14:23.152 "nvme_admin": false, 
00:14:23.152 "nvme_io": false, 00:14:23.152 "nvme_io_md": false, 00:14:23.152 "write_zeroes": true, 00:14:23.152 "zcopy": true, 00:14:23.152 "get_zone_info": false, 00:14:23.152 "zone_management": false, 00:14:23.152 "zone_append": false, 00:14:23.152 "compare": false, 00:14:23.412 "compare_and_write": false, 00:14:23.412 "abort": true, 00:14:23.412 "seek_hole": false, 00:14:23.412 "seek_data": false, 00:14:23.412 "copy": true, 00:14:23.412 "nvme_iov_md": false 00:14:23.412 }, 00:14:23.412 "memory_domains": [ 00:14:23.412 { 00:14:23.412 "dma_device_id": "system", 00:14:23.412 "dma_device_type": 1 00:14:23.412 }, 00:14:23.412 { 00:14:23.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.412 "dma_device_type": 2 00:14:23.412 } 00:14:23.412 ], 00:14:23.412 "driver_specific": {} 00:14:23.412 } 00:14:23.412 ] 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.412 "name": "Existed_Raid", 00:14:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.412 "strip_size_kb": 64, 00:14:23.412 "state": "configuring", 00:14:23.412 "raid_level": "raid5f", 00:14:23.412 "superblock": false, 00:14:23.412 "num_base_bdevs": 4, 00:14:23.412 "num_base_bdevs_discovered": 3, 00:14:23.412 "num_base_bdevs_operational": 4, 00:14:23.412 "base_bdevs_list": [ 00:14:23.412 { 00:14:23.412 "name": "BaseBdev1", 00:14:23.412 "uuid": "e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:23.412 "is_configured": true, 00:14:23.412 "data_offset": 0, 00:14:23.412 "data_size": 65536 00:14:23.412 }, 00:14:23.412 { 00:14:23.412 "name": "BaseBdev2", 00:14:23.412 "uuid": "10cd021a-cd17-43dd-97c5-51221cd1f1e0", 00:14:23.412 "is_configured": true, 00:14:23.412 "data_offset": 0, 00:14:23.412 "data_size": 65536 00:14:23.412 }, 00:14:23.412 { 
00:14:23.412 "name": "BaseBdev3", 00:14:23.412 "uuid": "94d141cf-53bc-475a-a351-4b7de3acfa92", 00:14:23.412 "is_configured": true, 00:14:23.412 "data_offset": 0, 00:14:23.412 "data_size": 65536 00:14:23.412 }, 00:14:23.412 { 00:14:23.412 "name": "BaseBdev4", 00:14:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.412 "is_configured": false, 00:14:23.412 "data_offset": 0, 00:14:23.412 "data_size": 0 00:14:23.412 } 00:14:23.412 ] 00:14:23.412 }' 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.412 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.673 23:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:23.673 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.673 23:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.673 [2024-11-18 23:09:42.999169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.673 [2024-11-18 23:09:42.999220] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:23.673 [2024-11-18 23:09:42.999227] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:23.673 [2024-11-18 23:09:42.999525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:23.673 [2024-11-18 23:09:43.000000] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:23.673 [2024-11-18 23:09:43.000022] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:23.673 [2024-11-18 23:09:43.000219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.673 BaseBdev4 00:14:23.673 23:09:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.673 [ 00:14:23.673 { 00:14:23.673 "name": "BaseBdev4", 00:14:23.673 "aliases": [ 00:14:23.673 "f74e7ac5-0300-4ad1-8470-a057e508731e" 00:14:23.673 ], 00:14:23.673 "product_name": "Malloc disk", 00:14:23.673 "block_size": 512, 00:14:23.673 "num_blocks": 65536, 00:14:23.673 "uuid": "f74e7ac5-0300-4ad1-8470-a057e508731e", 00:14:23.673 "assigned_rate_limits": { 00:14:23.673 "rw_ios_per_sec": 0, 00:14:23.673 
"rw_mbytes_per_sec": 0, 00:14:23.673 "r_mbytes_per_sec": 0, 00:14:23.673 "w_mbytes_per_sec": 0 00:14:23.673 }, 00:14:23.673 "claimed": true, 00:14:23.673 "claim_type": "exclusive_write", 00:14:23.673 "zoned": false, 00:14:23.673 "supported_io_types": { 00:14:23.673 "read": true, 00:14:23.673 "write": true, 00:14:23.673 "unmap": true, 00:14:23.673 "flush": true, 00:14:23.673 "reset": true, 00:14:23.673 "nvme_admin": false, 00:14:23.673 "nvme_io": false, 00:14:23.673 "nvme_io_md": false, 00:14:23.673 "write_zeroes": true, 00:14:23.673 "zcopy": true, 00:14:23.673 "get_zone_info": false, 00:14:23.673 "zone_management": false, 00:14:23.673 "zone_append": false, 00:14:23.673 "compare": false, 00:14:23.673 "compare_and_write": false, 00:14:23.673 "abort": true, 00:14:23.673 "seek_hole": false, 00:14:23.673 "seek_data": false, 00:14:23.673 "copy": true, 00:14:23.673 "nvme_iov_md": false 00:14:23.673 }, 00:14:23.673 "memory_domains": [ 00:14:23.673 { 00:14:23.673 "dma_device_id": "system", 00:14:23.673 "dma_device_type": 1 00:14:23.673 }, 00:14:23.673 { 00:14:23.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.673 "dma_device_type": 2 00:14:23.673 } 00:14:23.673 ], 00:14:23.673 "driver_specific": {} 00:14:23.673 } 00:14:23.673 ] 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.673 23:09:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.673 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.934 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.934 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.934 "name": "Existed_Raid", 00:14:23.934 "uuid": "a25bf903-7367-404d-8dc5-b0d81fd672bc", 00:14:23.934 "strip_size_kb": 64, 00:14:23.934 "state": "online", 00:14:23.934 "raid_level": "raid5f", 00:14:23.934 "superblock": false, 00:14:23.934 "num_base_bdevs": 4, 00:14:23.934 "num_base_bdevs_discovered": 4, 00:14:23.934 "num_base_bdevs_operational": 4, 00:14:23.934 "base_bdevs_list": [ 00:14:23.934 { 00:14:23.934 "name": 
"BaseBdev1", 00:14:23.934 "uuid": "e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:23.934 "is_configured": true, 00:14:23.934 "data_offset": 0, 00:14:23.934 "data_size": 65536 00:14:23.934 }, 00:14:23.934 { 00:14:23.934 "name": "BaseBdev2", 00:14:23.934 "uuid": "10cd021a-cd17-43dd-97c5-51221cd1f1e0", 00:14:23.934 "is_configured": true, 00:14:23.934 "data_offset": 0, 00:14:23.934 "data_size": 65536 00:14:23.934 }, 00:14:23.934 { 00:14:23.934 "name": "BaseBdev3", 00:14:23.934 "uuid": "94d141cf-53bc-475a-a351-4b7de3acfa92", 00:14:23.934 "is_configured": true, 00:14:23.934 "data_offset": 0, 00:14:23.934 "data_size": 65536 00:14:23.934 }, 00:14:23.934 { 00:14:23.934 "name": "BaseBdev4", 00:14:23.934 "uuid": "f74e7ac5-0300-4ad1-8470-a057e508731e", 00:14:23.934 "is_configured": true, 00:14:23.934 "data_offset": 0, 00:14:23.934 "data_size": 65536 00:14:23.934 } 00:14:23.934 ] 00:14:23.934 }' 00:14:23.934 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.934 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.194 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:24.194 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:24.194 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.195 [2024-11-18 23:09:43.450600] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.195 "name": "Existed_Raid", 00:14:24.195 "aliases": [ 00:14:24.195 "a25bf903-7367-404d-8dc5-b0d81fd672bc" 00:14:24.195 ], 00:14:24.195 "product_name": "Raid Volume", 00:14:24.195 "block_size": 512, 00:14:24.195 "num_blocks": 196608, 00:14:24.195 "uuid": "a25bf903-7367-404d-8dc5-b0d81fd672bc", 00:14:24.195 "assigned_rate_limits": { 00:14:24.195 "rw_ios_per_sec": 0, 00:14:24.195 "rw_mbytes_per_sec": 0, 00:14:24.195 "r_mbytes_per_sec": 0, 00:14:24.195 "w_mbytes_per_sec": 0 00:14:24.195 }, 00:14:24.195 "claimed": false, 00:14:24.195 "zoned": false, 00:14:24.195 "supported_io_types": { 00:14:24.195 "read": true, 00:14:24.195 "write": true, 00:14:24.195 "unmap": false, 00:14:24.195 "flush": false, 00:14:24.195 "reset": true, 00:14:24.195 "nvme_admin": false, 00:14:24.195 "nvme_io": false, 00:14:24.195 "nvme_io_md": false, 00:14:24.195 "write_zeroes": true, 00:14:24.195 "zcopy": false, 00:14:24.195 "get_zone_info": false, 00:14:24.195 "zone_management": false, 00:14:24.195 "zone_append": false, 00:14:24.195 "compare": false, 00:14:24.195 "compare_and_write": false, 00:14:24.195 "abort": false, 00:14:24.195 "seek_hole": false, 00:14:24.195 "seek_data": false, 00:14:24.195 "copy": false, 00:14:24.195 "nvme_iov_md": false 00:14:24.195 }, 00:14:24.195 "driver_specific": { 00:14:24.195 "raid": { 00:14:24.195 "uuid": "a25bf903-7367-404d-8dc5-b0d81fd672bc", 00:14:24.195 "strip_size_kb": 64, 
00:14:24.195 "state": "online", 00:14:24.195 "raid_level": "raid5f", 00:14:24.195 "superblock": false, 00:14:24.195 "num_base_bdevs": 4, 00:14:24.195 "num_base_bdevs_discovered": 4, 00:14:24.195 "num_base_bdevs_operational": 4, 00:14:24.195 "base_bdevs_list": [ 00:14:24.195 { 00:14:24.195 "name": "BaseBdev1", 00:14:24.195 "uuid": "e94b5a83-02e9-4a2a-b275-8fe5599dbd35", 00:14:24.195 "is_configured": true, 00:14:24.195 "data_offset": 0, 00:14:24.195 "data_size": 65536 00:14:24.195 }, 00:14:24.195 { 00:14:24.195 "name": "BaseBdev2", 00:14:24.195 "uuid": "10cd021a-cd17-43dd-97c5-51221cd1f1e0", 00:14:24.195 "is_configured": true, 00:14:24.195 "data_offset": 0, 00:14:24.195 "data_size": 65536 00:14:24.195 }, 00:14:24.195 { 00:14:24.195 "name": "BaseBdev3", 00:14:24.195 "uuid": "94d141cf-53bc-475a-a351-4b7de3acfa92", 00:14:24.195 "is_configured": true, 00:14:24.195 "data_offset": 0, 00:14:24.195 "data_size": 65536 00:14:24.195 }, 00:14:24.195 { 00:14:24.195 "name": "BaseBdev4", 00:14:24.195 "uuid": "f74e7ac5-0300-4ad1-8470-a057e508731e", 00:14:24.195 "is_configured": true, 00:14:24.195 "data_offset": 0, 00:14:24.195 "data_size": 65536 00:14:24.195 } 00:14:24.195 ] 00:14:24.195 } 00:14:24.195 } 00:14:24.195 }' 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:24.195 BaseBdev2 00:14:24.195 BaseBdev3 00:14:24.195 BaseBdev4' 00:14:24.195 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.455 23:09:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.455 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.456 [2024-11-18 23:09:43.769911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.456 23:09:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.456 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.716 23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.716 "name": "Existed_Raid", 00:14:24.716 "uuid": "a25bf903-7367-404d-8dc5-b0d81fd672bc", 00:14:24.716 "strip_size_kb": 64, 00:14:24.716 "state": "online", 00:14:24.716 "raid_level": "raid5f", 00:14:24.716 "superblock": false, 00:14:24.716 "num_base_bdevs": 4, 00:14:24.716 "num_base_bdevs_discovered": 3, 00:14:24.716 "num_base_bdevs_operational": 3, 00:14:24.716 "base_bdevs_list": [ 00:14:24.716 { 00:14:24.716 "name": null, 00:14:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.716 "is_configured": false, 00:14:24.716 "data_offset": 0, 00:14:24.716 "data_size": 65536 00:14:24.716 }, 00:14:24.716 { 00:14:24.716 "name": "BaseBdev2", 00:14:24.716 "uuid": "10cd021a-cd17-43dd-97c5-51221cd1f1e0", 00:14:24.716 "is_configured": true, 00:14:24.716 "data_offset": 0, 00:14:24.716 "data_size": 65536 00:14:24.716 }, 00:14:24.716 { 00:14:24.716 "name": "BaseBdev3", 00:14:24.716 "uuid": "94d141cf-53bc-475a-a351-4b7de3acfa92", 00:14:24.716 "is_configured": true, 00:14:24.716 "data_offset": 0, 00:14:24.716 "data_size": 65536 00:14:24.716 }, 00:14:24.716 { 00:14:24.716 "name": "BaseBdev4", 00:14:24.716 "uuid": "f74e7ac5-0300-4ad1-8470-a057e508731e", 00:14:24.716 "is_configured": true, 00:14:24.716 "data_offset": 0, 00:14:24.716 "data_size": 65536 00:14:24.716 } 00:14:24.716 ] 00:14:24.716 }' 00:14:24.716 
23:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.716 23:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.976 [2024-11-18 23:09:44.256335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.976 [2024-11-18 23:09:44.256422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.976 [2024-11-18 23:09:44.267237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]]
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.976 [2024-11-18 23:09:44.327155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.976 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.236 [2024-11-18 23:09:44.397800] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:14:25.236 [2024-11-18 23:09:44.397839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.236 BaseBdev2
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:25.236 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.237 [
00:14:25.237 {
00:14:25.237 "name": "BaseBdev2",
00:14:25.237 "aliases": [
00:14:25.237 "61787d9f-a770-45f0-bcb2-31bc1e519ac0"
00:14:25.237 ],
00:14:25.237 "product_name": "Malloc disk",
00:14:25.237 "block_size": 512,
00:14:25.237 "num_blocks": 65536,
00:14:25.237 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0",
00:14:25.237 "assigned_rate_limits": {
00:14:25.237 "rw_ios_per_sec": 0,
00:14:25.237 "rw_mbytes_per_sec": 0,
00:14:25.237 "r_mbytes_per_sec": 0,
00:14:25.237 "w_mbytes_per_sec": 0
00:14:25.237 },
00:14:25.237 "claimed": false,
00:14:25.237 "zoned": false,
00:14:25.237 "supported_io_types": {
00:14:25.237 "read": true,
00:14:25.237 "write": true,
00:14:25.237 "unmap": true,
00:14:25.237 "flush": true,
00:14:25.237 "reset": true,
00:14:25.237 "nvme_admin": false,
00:14:25.237 "nvme_io": false,
00:14:25.237 "nvme_io_md": false,
00:14:25.237 "write_zeroes": true,
00:14:25.237 "zcopy": true,
00:14:25.237 "get_zone_info": false,
00:14:25.237 "zone_management": false,
00:14:25.237 "zone_append": false,
00:14:25.237 "compare": false,
00:14:25.237 "compare_and_write": false,
00:14:25.237 "abort": true,
00:14:25.237 "seek_hole": false,
00:14:25.237 "seek_data": false,
00:14:25.237 "copy": true,
00:14:25.237 "nvme_iov_md": false
00:14:25.237 },
00:14:25.237 "memory_domains": [
00:14:25.237 {
00:14:25.237 "dma_device_id": "system",
00:14:25.237 "dma_device_type": 1
00:14:25.237 },
00:14:25.237 {
00:14:25.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:25.237 "dma_device_type": 2
00:14:25.237 }
00:14:25.237 ],
00:14:25.237 "driver_specific": {}
00:14:25.237 }
00:14:25.237 ]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.237 BaseBdev3
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.237 [
00:14:25.237 {
00:14:25.237 "name": "BaseBdev3",
00:14:25.237 "aliases": [
00:14:25.237 "3ba3c056-6f18-4d29-943c-e6c8f42cc408"
00:14:25.237 ],
00:14:25.237 "product_name": "Malloc disk",
00:14:25.237 "block_size": 512,
00:14:25.237 "num_blocks": 65536,
00:14:25.237 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408",
00:14:25.237 "assigned_rate_limits": {
00:14:25.237 "rw_ios_per_sec": 0,
00:14:25.237 "rw_mbytes_per_sec": 0,
00:14:25.237 "r_mbytes_per_sec": 0,
00:14:25.237 "w_mbytes_per_sec": 0
00:14:25.237 },
00:14:25.237 "claimed": false,
00:14:25.237 "zoned": false,
00:14:25.237 "supported_io_types": {
00:14:25.237 "read": true,
00:14:25.237 "write": true,
00:14:25.237 "unmap": true,
00:14:25.237 "flush": true,
00:14:25.237 "reset": true,
00:14:25.237 "nvme_admin": false,
00:14:25.237 "nvme_io": false,
00:14:25.237 "nvme_io_md": false,
00:14:25.237 "write_zeroes": true,
00:14:25.237 "zcopy": true,
00:14:25.237 "get_zone_info": false,
00:14:25.237 "zone_management": false,
00:14:25.237 "zone_append": false,
00:14:25.237 "compare": false,
00:14:25.237 "compare_and_write": false,
00:14:25.237 "abort": true,
00:14:25.237 "seek_hole": false,
00:14:25.237 "seek_data": false,
00:14:25.237 "copy": true,
00:14:25.237 "nvme_iov_md": false
00:14:25.237 },
00:14:25.237 "memory_domains": [
00:14:25.237 {
00:14:25.237 "dma_device_id": "system",
00:14:25.237 "dma_device_type": 1
00:14:25.237 },
00:14:25.237 {
00:14:25.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:25.237 "dma_device_type": 2
00:14:25.237 }
00:14:25.237 ],
00:14:25.237 "driver_specific": {}
00:14:25.237 }
00:14:25.237 ]
00:14:25.237 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.238 BaseBdev4
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.238 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.498 [
00:14:25.498 {
00:14:25.498 "name": "BaseBdev4",
00:14:25.498 "aliases": [
00:14:25.498 "33f68588-89d8-453a-bb88-64deb28ed13b"
00:14:25.498 ],
00:14:25.498 "product_name": "Malloc disk",
00:14:25.498 "block_size": 512,
00:14:25.498 "num_blocks": 65536,
00:14:25.498 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b",
00:14:25.498 "assigned_rate_limits": {
00:14:25.498 "rw_ios_per_sec": 0,
00:14:25.498 "rw_mbytes_per_sec": 0,
00:14:25.498 "r_mbytes_per_sec": 0,
00:14:25.498 "w_mbytes_per_sec": 0
00:14:25.498 },
00:14:25.498 "claimed": false,
00:14:25.498 "zoned": false,
00:14:25.498 "supported_io_types": {
00:14:25.498 "read": true,
00:14:25.498 "write": true,
00:14:25.498 "unmap": true,
00:14:25.498 "flush": true,
00:14:25.498 "reset": true,
00:14:25.498 "nvme_admin": false,
00:14:25.498 "nvme_io": false,
00:14:25.498 "nvme_io_md": false,
00:14:25.498 "write_zeroes": true,
00:14:25.498 "zcopy": true,
00:14:25.498 "get_zone_info": false,
00:14:25.498 "zone_management": false,
00:14:25.498 "zone_append": false,
00:14:25.498 "compare": false,
00:14:25.498 "compare_and_write": false,
00:14:25.498 "abort": true,
00:14:25.498 "seek_hole": false,
00:14:25.498 "seek_data": false,
00:14:25.498 "copy": true,
00:14:25.498 "nvme_iov_md": false
00:14:25.498 },
00:14:25.498 "memory_domains": [
00:14:25.498 {
00:14:25.498 "dma_device_id": "system",
00:14:25.498 "dma_device_type": 1
00:14:25.498 },
00:14:25.498 {
00:14:25.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:25.498 "dma_device_type": 2
00:14:25.498 }
00:14:25.498 ],
00:14:25.498 "driver_specific": {}
00:14:25.498 }
00:14:25.498 ]
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.498 [2024-11-18 23:09:44.653699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:25.498 [2024-11-18 23:09:44.653782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:25.498 [2024-11-18 23:09:44.653837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:25.498 [2024-11-18 23:09:44.655659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:25.498 [2024-11-18 23:09:44.655744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:25.498 "name": "Existed_Raid",
00:14:25.498 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.498 "strip_size_kb": 64,
00:14:25.498 "state": "configuring",
00:14:25.498 "raid_level": "raid5f",
00:14:25.498 "superblock": false,
00:14:25.498 "num_base_bdevs": 4,
00:14:25.498 "num_base_bdevs_discovered": 3,
00:14:25.498 "num_base_bdevs_operational": 4,
00:14:25.498 "base_bdevs_list": [
00:14:25.498 {
00:14:25.498 "name": "BaseBdev1",
00:14:25.498 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.498 "is_configured": false,
00:14:25.498 "data_offset": 0,
00:14:25.498 "data_size": 0
00:14:25.498 },
00:14:25.498 {
00:14:25.498 "name": "BaseBdev2",
00:14:25.498 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0",
00:14:25.498 "is_configured": true,
00:14:25.498 "data_offset": 0,
00:14:25.498 "data_size": 65536
00:14:25.498 },
00:14:25.498 {
00:14:25.498 "name": "BaseBdev3",
00:14:25.498 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408",
00:14:25.498 "is_configured": true,
00:14:25.498 "data_offset": 0,
00:14:25.498 "data_size": 65536
00:14:25.498 },
00:14:25.498 {
00:14:25.498 "name": "BaseBdev4",
00:14:25.498 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b",
00:14:25.498 "is_configured": true,
00:14:25.498 "data_offset": 0,
00:14:25.498 "data_size": 65536
00:14:25.498 }
00:14:25.498 ]
00:14:25.498 }'
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:25.498 23:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.758 [2024-11-18 23:09:45.076945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:25.758 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:25.759 "name": "Existed_Raid",
00:14:25.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.759 "strip_size_kb": 64,
00:14:25.759 "state": "configuring",
00:14:25.759 "raid_level": "raid5f",
00:14:25.759 "superblock": false,
00:14:25.759 "num_base_bdevs": 4,
00:14:25.759 "num_base_bdevs_discovered": 2,
00:14:25.759 "num_base_bdevs_operational": 4,
00:14:25.759 "base_bdevs_list": [
00:14:25.759 {
00:14:25.759 "name": "BaseBdev1",
00:14:25.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.759 "is_configured": false,
00:14:25.759 "data_offset": 0,
00:14:25.759 "data_size": 0
00:14:25.759 },
00:14:25.759 {
00:14:25.759 "name": null,
00:14:25.759 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0",
00:14:25.759 "is_configured": false,
00:14:25.759 "data_offset": 0,
00:14:25.759 "data_size": 65536
00:14:25.759 },
00:14:25.759 {
00:14:25.759 "name": "BaseBdev3",
00:14:25.759 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408",
00:14:25.759 "is_configured": true,
00:14:25.759 "data_offset": 0,
00:14:25.759 "data_size": 65536
00:14:25.759 },
00:14:25.759 {
00:14:25.759 "name": "BaseBdev4",
00:14:25.759 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b",
00:14:25.759 "is_configured": true,
00:14:25.759 "data_offset": 0,
00:14:25.759 "data_size": 65536
00:14:25.759 }
00:14:25.759 ]
00:14:25.759 }'
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:25.759 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.329 [2024-11-18 23:09:45.587041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:26.329 BaseBdev1
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.329 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.329 [
00:14:26.329 {
00:14:26.329 "name": "BaseBdev1",
00:14:26.329 "aliases": [
00:14:26.329 "3959f5aa-2cf5-40d1-bb84-436bffdb9c06"
00:14:26.330 ],
00:14:26.330 "product_name": "Malloc disk",
00:14:26.330 "block_size": 512,
00:14:26.330 "num_blocks": 65536,
00:14:26.330 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06",
00:14:26.330 "assigned_rate_limits": {
00:14:26.330 "rw_ios_per_sec": 0,
00:14:26.330 "rw_mbytes_per_sec": 0,
00:14:26.330 "r_mbytes_per_sec": 0,
00:14:26.330 "w_mbytes_per_sec": 0
00:14:26.330 },
00:14:26.330 "claimed": true,
00:14:26.330 "claim_type": "exclusive_write",
00:14:26.330 "zoned": false,
00:14:26.330 "supported_io_types": {
00:14:26.330 "read": true,
00:14:26.330 "write": true,
00:14:26.330 "unmap": true,
00:14:26.330 "flush": true,
00:14:26.330 "reset": true,
00:14:26.330 "nvme_admin": false,
00:14:26.330 "nvme_io": false,
00:14:26.330 "nvme_io_md": false,
00:14:26.330 "write_zeroes": true,
00:14:26.330 "zcopy": true,
00:14:26.330 "get_zone_info": false,
00:14:26.330 "zone_management": false,
00:14:26.330 "zone_append": false,
00:14:26.330 "compare": false,
00:14:26.330 "compare_and_write": false,
00:14:26.330 "abort": true,
00:14:26.330 "seek_hole": false,
00:14:26.330 "seek_data": false,
00:14:26.330 "copy": true,
00:14:26.330 "nvme_iov_md": false
00:14:26.330 },
00:14:26.330 "memory_domains": [
00:14:26.330 {
00:14:26.330 "dma_device_id": "system",
00:14:26.330 "dma_device_type": 1
00:14:26.330 },
00:14:26.330 {
00:14:26.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:26.330 "dma_device_type": 2
00:14:26.330 }
00:14:26.330 ],
00:14:26.330 "driver_specific": {}
00:14:26.330 }
00:14:26.330 ]
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:26.330 "name": "Existed_Raid",
00:14:26.330 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.330 "strip_size_kb": 64,
00:14:26.330 "state": "configuring",
00:14:26.330 "raid_level": "raid5f",
00:14:26.330 "superblock": false,
00:14:26.330 "num_base_bdevs": 4,
00:14:26.330 "num_base_bdevs_discovered": 3,
00:14:26.330 "num_base_bdevs_operational": 4,
00:14:26.330 "base_bdevs_list": [
00:14:26.330 {
00:14:26.330 "name": "BaseBdev1",
00:14:26.330 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06",
00:14:26.330 "is_configured": true,
00:14:26.330 "data_offset": 0,
00:14:26.330 "data_size": 65536
00:14:26.330 },
00:14:26.330 {
00:14:26.330 "name": null,
00:14:26.330 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0",
00:14:26.330 "is_configured": false,
00:14:26.330 "data_offset": 0,
00:14:26.330 "data_size": 65536
00:14:26.330 },
00:14:26.330 {
00:14:26.330 "name": "BaseBdev3",
00:14:26.330 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408",
00:14:26.330 "is_configured": true,
00:14:26.330 "data_offset": 0,
00:14:26.330 "data_size": 65536
00:14:26.330 },
00:14:26.330 {
00:14:26.330 "name": "BaseBdev4",
00:14:26.330 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b",
00:14:26.330 "is_configured": true,
00:14:26.330 "data_offset": 0,
00:14:26.330 "data_size": 65536
00:14:26.330 }
00:14:26.330 ]
00:14:26.330 }'
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:26.330 23:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.898 [2024-11-18 23:09:46.146092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.898 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:26.898 "name": "Existed_Raid",
00:14:26.898 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.898 "strip_size_kb": 64,
00:14:26.898 "state": "configuring",
00:14:26.898 "raid_level": "raid5f",
00:14:26.898 "superblock": false,
00:14:26.898 "num_base_bdevs": 4,
00:14:26.898 "num_base_bdevs_discovered": 2,
00:14:26.898 "num_base_bdevs_operational": 4,
00:14:26.898 "base_bdevs_list": [
00:14:26.898 {
00:14:26.898 "name": "BaseBdev1",
00:14:26.898 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06",
00:14:26.898 "is_configured": true,
00:14:26.898 "data_offset": 0,
00:14:26.898 "data_size": 65536
00:14:26.898 },
00:14:26.898 {
00:14:26.898 "name": null,
00:14:26.899 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0",
00:14:26.899 "is_configured": false,
00:14:26.899 "data_offset": 0,
00:14:26.899 "data_size": 65536
00:14:26.899 },
00:14:26.899 {
00:14:26.899 "name": null,
00:14:26.899 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408",
00:14:26.899 "is_configured": false,
00:14:26.899 "data_offset": 0,
00:14:26.899 "data_size": 65536
00:14:26.899 },
00:14:26.899 {
00:14:26.899 "name": "BaseBdev4",
00:14:26.899 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b",
00:14:26.899 "is_configured": true,
00:14:26.899 "data_offset": 0,
00:14:26.899 "data_size": 65536
00:14:26.899 }
00:14:26.899 ]
00:14:26.899 }'
00:14:26.899 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:26.899 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.468 [2024-11-18 23:09:46.621349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.468 "name": "Existed_Raid", 00:14:27.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.468 "strip_size_kb": 64, 00:14:27.468 "state": "configuring", 00:14:27.468 "raid_level": "raid5f", 00:14:27.468 "superblock": false, 00:14:27.468 "num_base_bdevs": 4, 00:14:27.468 "num_base_bdevs_discovered": 3, 00:14:27.468 "num_base_bdevs_operational": 4, 00:14:27.468 "base_bdevs_list": [ 00:14:27.468 { 00:14:27.468 "name": "BaseBdev1", 00:14:27.468 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06", 00:14:27.468 "is_configured": true, 00:14:27.468 "data_offset": 0, 00:14:27.468 "data_size": 65536 00:14:27.468 }, 00:14:27.468 { 00:14:27.468 "name": null, 00:14:27.468 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0", 00:14:27.468 "is_configured": 
false, 00:14:27.468 "data_offset": 0, 00:14:27.468 "data_size": 65536 00:14:27.468 }, 00:14:27.468 { 00:14:27.468 "name": "BaseBdev3", 00:14:27.468 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408", 00:14:27.468 "is_configured": true, 00:14:27.468 "data_offset": 0, 00:14:27.468 "data_size": 65536 00:14:27.468 }, 00:14:27.468 { 00:14:27.468 "name": "BaseBdev4", 00:14:27.468 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b", 00:14:27.468 "is_configured": true, 00:14:27.468 "data_offset": 0, 00:14:27.468 "data_size": 65536 00:14:27.468 } 00:14:27.468 ] 00:14:27.468 }' 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.468 23:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.728 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.728 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.728 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.728 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:27.728 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.988 [2024-11-18 23:09:47.124481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.988 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.989 "name": "Existed_Raid", 00:14:27.989 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:27.989 "strip_size_kb": 64, 00:14:27.989 "state": "configuring", 00:14:27.989 "raid_level": "raid5f", 00:14:27.989 "superblock": false, 00:14:27.989 "num_base_bdevs": 4, 00:14:27.989 "num_base_bdevs_discovered": 2, 00:14:27.989 "num_base_bdevs_operational": 4, 00:14:27.989 "base_bdevs_list": [ 00:14:27.989 { 00:14:27.989 "name": null, 00:14:27.989 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06", 00:14:27.989 "is_configured": false, 00:14:27.989 "data_offset": 0, 00:14:27.989 "data_size": 65536 00:14:27.989 }, 00:14:27.989 { 00:14:27.989 "name": null, 00:14:27.989 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0", 00:14:27.989 "is_configured": false, 00:14:27.989 "data_offset": 0, 00:14:27.989 "data_size": 65536 00:14:27.989 }, 00:14:27.989 { 00:14:27.989 "name": "BaseBdev3", 00:14:27.989 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408", 00:14:27.989 "is_configured": true, 00:14:27.989 "data_offset": 0, 00:14:27.989 "data_size": 65536 00:14:27.989 }, 00:14:27.989 { 00:14:27.989 "name": "BaseBdev4", 00:14:27.989 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b", 00:14:27.989 "is_configured": true, 00:14:27.989 "data_offset": 0, 00:14:27.989 "data_size": 65536 00:14:27.989 } 00:14:27.989 ] 00:14:27.989 }' 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.989 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.248 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.248 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.248 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.248 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.248 23:09:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.517 [2024-11-18 23:09:47.630214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.517 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.518 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.518 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.518 23:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.518 "name": "Existed_Raid", 00:14:28.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.518 "strip_size_kb": 64, 00:14:28.518 "state": "configuring", 00:14:28.518 "raid_level": "raid5f", 00:14:28.518 "superblock": false, 00:14:28.518 "num_base_bdevs": 4, 00:14:28.518 "num_base_bdevs_discovered": 3, 00:14:28.518 "num_base_bdevs_operational": 4, 00:14:28.518 "base_bdevs_list": [ 00:14:28.518 { 00:14:28.518 "name": null, 00:14:28.518 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06", 00:14:28.518 "is_configured": false, 00:14:28.518 "data_offset": 0, 00:14:28.518 "data_size": 65536 00:14:28.518 }, 00:14:28.518 { 00:14:28.518 "name": "BaseBdev2", 00:14:28.518 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0", 00:14:28.518 "is_configured": true, 00:14:28.518 "data_offset": 0, 00:14:28.518 "data_size": 65536 00:14:28.518 }, 00:14:28.518 { 00:14:28.518 "name": "BaseBdev3", 00:14:28.518 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408", 00:14:28.518 "is_configured": true, 00:14:28.518 "data_offset": 0, 00:14:28.518 "data_size": 65536 00:14:28.518 }, 00:14:28.518 { 00:14:28.519 "name": "BaseBdev4", 00:14:28.519 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b", 00:14:28.519 "is_configured": true, 00:14:28.519 "data_offset": 0, 00:14:28.519 "data_size": 65536 00:14:28.519 } 00:14:28.519 ] 00:14:28.519 }' 00:14:28.519 23:09:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.519 23:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:28.784 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.785 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:28.785 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.785 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.785 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.045 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3959f5aa-2cf5-40d1-bb84-436bffdb9c06 00:14:29.045 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.045 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.045 [2024-11-18 23:09:48.176238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:29.045 [2024-11-18 
23:09:48.176302] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:29.045 [2024-11-18 23:09:48.176311] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:29.045 [2024-11-18 23:09:48.176544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:29.045 [2024-11-18 23:09:48.177008] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:29.045 [2024-11-18 23:09:48.177031] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:29.045 [2024-11-18 23:09:48.177185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.045 NewBaseBdev 00:14:29.045 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.045 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:29.045 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.046 [ 00:14:29.046 { 00:14:29.046 "name": "NewBaseBdev", 00:14:29.046 "aliases": [ 00:14:29.046 "3959f5aa-2cf5-40d1-bb84-436bffdb9c06" 00:14:29.046 ], 00:14:29.046 "product_name": "Malloc disk", 00:14:29.046 "block_size": 512, 00:14:29.046 "num_blocks": 65536, 00:14:29.046 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06", 00:14:29.046 "assigned_rate_limits": { 00:14:29.046 "rw_ios_per_sec": 0, 00:14:29.046 "rw_mbytes_per_sec": 0, 00:14:29.046 "r_mbytes_per_sec": 0, 00:14:29.046 "w_mbytes_per_sec": 0 00:14:29.046 }, 00:14:29.046 "claimed": true, 00:14:29.046 "claim_type": "exclusive_write", 00:14:29.046 "zoned": false, 00:14:29.046 "supported_io_types": { 00:14:29.046 "read": true, 00:14:29.046 "write": true, 00:14:29.046 "unmap": true, 00:14:29.046 "flush": true, 00:14:29.046 "reset": true, 00:14:29.046 "nvme_admin": false, 00:14:29.046 "nvme_io": false, 00:14:29.046 "nvme_io_md": false, 00:14:29.046 "write_zeroes": true, 00:14:29.046 "zcopy": true, 00:14:29.046 "get_zone_info": false, 00:14:29.046 "zone_management": false, 00:14:29.046 "zone_append": false, 00:14:29.046 "compare": false, 00:14:29.046 "compare_and_write": false, 00:14:29.046 "abort": true, 00:14:29.046 "seek_hole": false, 00:14:29.046 "seek_data": false, 00:14:29.046 "copy": true, 00:14:29.046 "nvme_iov_md": false 00:14:29.046 }, 00:14:29.046 "memory_domains": [ 00:14:29.046 { 00:14:29.046 "dma_device_id": "system", 00:14:29.046 "dma_device_type": 1 00:14:29.046 }, 00:14:29.046 { 00:14:29.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.046 "dma_device_type": 2 00:14:29.046 } 
00:14:29.046 ], 00:14:29.046 "driver_specific": {} 00:14:29.046 } 00:14:29.046 ] 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.046 "name": "Existed_Raid", 00:14:29.046 "uuid": "f6c474ce-89f3-49b9-b90c-2942cc82be9c", 00:14:29.046 "strip_size_kb": 64, 00:14:29.046 "state": "online", 00:14:29.046 "raid_level": "raid5f", 00:14:29.046 "superblock": false, 00:14:29.046 "num_base_bdevs": 4, 00:14:29.046 "num_base_bdevs_discovered": 4, 00:14:29.046 "num_base_bdevs_operational": 4, 00:14:29.046 "base_bdevs_list": [ 00:14:29.046 { 00:14:29.046 "name": "NewBaseBdev", 00:14:29.046 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06", 00:14:29.046 "is_configured": true, 00:14:29.046 "data_offset": 0, 00:14:29.046 "data_size": 65536 00:14:29.046 }, 00:14:29.046 { 00:14:29.046 "name": "BaseBdev2", 00:14:29.046 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0", 00:14:29.046 "is_configured": true, 00:14:29.046 "data_offset": 0, 00:14:29.046 "data_size": 65536 00:14:29.046 }, 00:14:29.046 { 00:14:29.046 "name": "BaseBdev3", 00:14:29.046 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408", 00:14:29.046 "is_configured": true, 00:14:29.046 "data_offset": 0, 00:14:29.046 "data_size": 65536 00:14:29.046 }, 00:14:29.046 { 00:14:29.046 "name": "BaseBdev4", 00:14:29.046 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b", 00:14:29.046 "is_configured": true, 00:14:29.046 "data_offset": 0, 00:14:29.046 "data_size": 65536 00:14:29.046 } 00:14:29.046 ] 00:14:29.046 }' 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.046 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.305 [2024-11-18 23:09:48.663601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.305 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.564 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.564 "name": "Existed_Raid", 00:14:29.564 "aliases": [ 00:14:29.564 "f6c474ce-89f3-49b9-b90c-2942cc82be9c" 00:14:29.564 ], 00:14:29.564 "product_name": "Raid Volume", 00:14:29.564 "block_size": 512, 00:14:29.564 "num_blocks": 196608, 00:14:29.564 "uuid": "f6c474ce-89f3-49b9-b90c-2942cc82be9c", 00:14:29.564 "assigned_rate_limits": { 00:14:29.564 "rw_ios_per_sec": 0, 00:14:29.564 "rw_mbytes_per_sec": 0, 00:14:29.564 "r_mbytes_per_sec": 0, 00:14:29.564 "w_mbytes_per_sec": 0 00:14:29.564 }, 00:14:29.564 "claimed": false, 00:14:29.564 "zoned": false, 00:14:29.564 "supported_io_types": { 00:14:29.564 "read": true, 00:14:29.564 "write": true, 00:14:29.564 "unmap": false, 00:14:29.564 "flush": false, 00:14:29.564 "reset": true, 00:14:29.564 "nvme_admin": false, 00:14:29.564 "nvme_io": false, 00:14:29.564 "nvme_io_md": 
false, 00:14:29.564 "write_zeroes": true, 00:14:29.564 "zcopy": false, 00:14:29.564 "get_zone_info": false, 00:14:29.564 "zone_management": false, 00:14:29.564 "zone_append": false, 00:14:29.564 "compare": false, 00:14:29.564 "compare_and_write": false, 00:14:29.564 "abort": false, 00:14:29.564 "seek_hole": false, 00:14:29.564 "seek_data": false, 00:14:29.564 "copy": false, 00:14:29.564 "nvme_iov_md": false 00:14:29.564 }, 00:14:29.564 "driver_specific": { 00:14:29.564 "raid": { 00:14:29.564 "uuid": "f6c474ce-89f3-49b9-b90c-2942cc82be9c", 00:14:29.564 "strip_size_kb": 64, 00:14:29.564 "state": "online", 00:14:29.564 "raid_level": "raid5f", 00:14:29.564 "superblock": false, 00:14:29.564 "num_base_bdevs": 4, 00:14:29.564 "num_base_bdevs_discovered": 4, 00:14:29.564 "num_base_bdevs_operational": 4, 00:14:29.564 "base_bdevs_list": [ 00:14:29.564 { 00:14:29.564 "name": "NewBaseBdev", 00:14:29.564 "uuid": "3959f5aa-2cf5-40d1-bb84-436bffdb9c06", 00:14:29.564 "is_configured": true, 00:14:29.564 "data_offset": 0, 00:14:29.564 "data_size": 65536 00:14:29.564 }, 00:14:29.564 { 00:14:29.564 "name": "BaseBdev2", 00:14:29.564 "uuid": "61787d9f-a770-45f0-bcb2-31bc1e519ac0", 00:14:29.564 "is_configured": true, 00:14:29.564 "data_offset": 0, 00:14:29.564 "data_size": 65536 00:14:29.564 }, 00:14:29.564 { 00:14:29.564 "name": "BaseBdev3", 00:14:29.564 "uuid": "3ba3c056-6f18-4d29-943c-e6c8f42cc408", 00:14:29.564 "is_configured": true, 00:14:29.564 "data_offset": 0, 00:14:29.564 "data_size": 65536 00:14:29.564 }, 00:14:29.564 { 00:14:29.564 "name": "BaseBdev4", 00:14:29.564 "uuid": "33f68588-89d8-453a-bb88-64deb28ed13b", 00:14:29.565 "is_configured": true, 00:14:29.565 "data_offset": 0, 00:14:29.565 "data_size": 65536 00:14:29.565 } 00:14:29.565 ] 00:14:29.565 } 00:14:29.565 } 00:14:29.565 }' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.565 23:09:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:29.565 BaseBdev2 00:14:29.565 BaseBdev3 00:14:29.565 BaseBdev4' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.565 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.565 23:09:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.825 [2024-11-18 23:09:48.958995] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.825 [2024-11-18 23:09:48.959023] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.825 [2024-11-18 23:09:48.959081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.825 [2024-11-18 23:09:48.959359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.825 [2024-11-18 23:09:48.959373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93205 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93205 ']' 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93205 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:14:29.825 23:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93205 00:14:29.825 killing process with pid 93205 00:14:29.825 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.825 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.825 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93205' 00:14:29.825 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93205 00:14:29.825 [2024-11-18 23:09:49.008640] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.825 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93205 00:14:29.825 [2024-11-18 23:09:49.047892] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:30.090 00:14:30.090 real 0m9.661s 00:14:30.090 user 0m16.424s 00:14:30.090 sys 0m2.116s 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.090 ************************************ 00:14:30.090 END TEST raid5f_state_function_test 00:14:30.090 ************************************ 00:14:30.090 23:09:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:30.090 23:09:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:30.090 23:09:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.090 23:09:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.090 ************************************ 00:14:30.090 START TEST 
raid5f_state_function_test_sb 00:14:30.090 ************************************ 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:30.090 
23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93860 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:30.090 Process raid pid: 93860 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93860' 00:14:30.090 23:09:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93860 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93860 ']' 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.090 23:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.351 [2024-11-18 23:09:49.479897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:30.351 [2024-11-18 23:09:49.480041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.351 [2024-11-18 23:09:49.643005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.351 [2024-11-18 23:09:49.690146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.610 [2024-11-18 23:09:49.733318] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.610 [2024-11-18 23:09:49.733357] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 [2024-11-18 23:09:50.318617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.187 [2024-11-18 23:09:50.318669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.187 [2024-11-18 23:09:50.318680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.187 [2024-11-18 23:09:50.318689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.187 [2024-11-18 23:09:50.318695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:31.187 [2024-11-18 23:09:50.318706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.187 [2024-11-18 23:09:50.318711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:31.187 [2024-11-18 23:09:50.318722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.187 "name": "Existed_Raid", 00:14:31.187 "uuid": "257b1040-3fdd-4980-bded-2caa020fdf30", 00:14:31.187 "strip_size_kb": 64, 00:14:31.187 "state": "configuring", 00:14:31.187 "raid_level": "raid5f", 00:14:31.187 "superblock": true, 00:14:31.187 "num_base_bdevs": 4, 00:14:31.187 "num_base_bdevs_discovered": 0, 00:14:31.187 "num_base_bdevs_operational": 4, 00:14:31.187 "base_bdevs_list": [ 00:14:31.187 { 00:14:31.187 "name": "BaseBdev1", 00:14:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.187 "is_configured": false, 00:14:31.187 "data_offset": 0, 00:14:31.187 "data_size": 0 00:14:31.187 }, 00:14:31.187 { 00:14:31.187 "name": "BaseBdev2", 00:14:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.187 "is_configured": false, 00:14:31.187 "data_offset": 0, 00:14:31.187 "data_size": 0 00:14:31.187 }, 00:14:31.187 { 00:14:31.187 "name": "BaseBdev3", 00:14:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.187 "is_configured": false, 00:14:31.187 "data_offset": 0, 00:14:31.187 "data_size": 0 00:14:31.187 }, 00:14:31.187 { 00:14:31.187 "name": "BaseBdev4", 00:14:31.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.187 "is_configured": false, 00:14:31.187 "data_offset": 0, 00:14:31.187 "data_size": 0 00:14:31.187 } 00:14:31.187 ] 00:14:31.187 }' 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.187 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.445 [2024-11-18 23:09:50.745776] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.445 [2024-11-18 23:09:50.745814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.445 [2024-11-18 23:09:50.757805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.445 [2024-11-18 23:09:50.757844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.445 [2024-11-18 23:09:50.757852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.445 [2024-11-18 23:09:50.757861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.445 [2024-11-18 23:09:50.757867] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.445 [2024-11-18 23:09:50.757877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.445 [2024-11-18 23:09:50.757883] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:31.445 [2024-11-18 23:09:50.757891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.445 [2024-11-18 23:09:50.778544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.445 BaseBdev1 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.445 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.445 [ 00:14:31.445 { 00:14:31.445 "name": "BaseBdev1", 00:14:31.445 "aliases": [ 00:14:31.445 "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44" 00:14:31.445 ], 00:14:31.445 "product_name": "Malloc disk", 00:14:31.445 "block_size": 512, 00:14:31.445 "num_blocks": 65536, 00:14:31.445 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:31.446 "assigned_rate_limits": { 00:14:31.446 "rw_ios_per_sec": 0, 00:14:31.446 "rw_mbytes_per_sec": 0, 00:14:31.446 "r_mbytes_per_sec": 0, 00:14:31.446 "w_mbytes_per_sec": 0 00:14:31.446 }, 00:14:31.446 "claimed": true, 00:14:31.446 "claim_type": "exclusive_write", 00:14:31.446 "zoned": false, 00:14:31.446 "supported_io_types": { 00:14:31.446 "read": true, 00:14:31.446 "write": true, 00:14:31.446 "unmap": true, 00:14:31.446 "flush": true, 00:14:31.446 "reset": true, 00:14:31.446 "nvme_admin": false, 00:14:31.446 "nvme_io": false, 00:14:31.446 "nvme_io_md": false, 00:14:31.446 "write_zeroes": true, 00:14:31.446 "zcopy": true, 00:14:31.446 "get_zone_info": false, 00:14:31.446 "zone_management": false, 00:14:31.446 "zone_append": false, 00:14:31.446 "compare": false, 00:14:31.446 "compare_and_write": false, 00:14:31.446 "abort": true, 00:14:31.446 "seek_hole": false, 00:14:31.446 "seek_data": false, 00:14:31.446 "copy": true, 00:14:31.446 "nvme_iov_md": false 00:14:31.446 }, 00:14:31.446 "memory_domains": [ 00:14:31.446 { 00:14:31.446 "dma_device_id": "system", 00:14:31.446 "dma_device_type": 1 00:14:31.446 }, 00:14:31.446 { 00:14:31.446 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:31.446 "dma_device_type": 2 00:14:31.446 } 00:14:31.446 ], 00:14:31.446 "driver_specific": {} 00:14:31.446 } 00:14:31.446 ] 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.446 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.705 23:09:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.705 "name": "Existed_Raid", 00:14:31.705 "uuid": "d00a953d-2111-4060-ab74-cc535a584f82", 00:14:31.705 "strip_size_kb": 64, 00:14:31.705 "state": "configuring", 00:14:31.705 "raid_level": "raid5f", 00:14:31.705 "superblock": true, 00:14:31.705 "num_base_bdevs": 4, 00:14:31.705 "num_base_bdevs_discovered": 1, 00:14:31.705 "num_base_bdevs_operational": 4, 00:14:31.705 "base_bdevs_list": [ 00:14:31.705 { 00:14:31.705 "name": "BaseBdev1", 00:14:31.705 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:31.705 "is_configured": true, 00:14:31.705 "data_offset": 2048, 00:14:31.705 "data_size": 63488 00:14:31.705 }, 00:14:31.705 { 00:14:31.705 "name": "BaseBdev2", 00:14:31.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.705 "is_configured": false, 00:14:31.705 "data_offset": 0, 00:14:31.705 "data_size": 0 00:14:31.705 }, 00:14:31.705 { 00:14:31.705 "name": "BaseBdev3", 00:14:31.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.705 "is_configured": false, 00:14:31.705 "data_offset": 0, 00:14:31.705 "data_size": 0 00:14:31.705 }, 00:14:31.705 { 00:14:31.705 "name": "BaseBdev4", 00:14:31.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.705 "is_configured": false, 00:14:31.705 "data_offset": 0, 00:14:31.705 "data_size": 0 00:14:31.705 } 00:14:31.705 ] 00:14:31.705 }' 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.705 23:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.966 23:09:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.966 [2024-11-18 23:09:51.245765] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.966 [2024-11-18 23:09:51.245809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.966 [2024-11-18 23:09:51.257777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.966 [2024-11-18 23:09:51.259591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.966 [2024-11-18 23:09:51.259630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.966 [2024-11-18 23:09:51.259639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.966 [2024-11-18 23:09:51.259647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.966 [2024-11-18 23:09:51.259653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:31.966 [2024-11-18 23:09:51.259661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.966 23:09:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.966 "name": "Existed_Raid", 00:14:31.966 "uuid": "f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:31.966 "strip_size_kb": 64, 00:14:31.966 "state": "configuring", 00:14:31.966 "raid_level": "raid5f", 00:14:31.966 "superblock": true, 00:14:31.966 "num_base_bdevs": 4, 00:14:31.966 "num_base_bdevs_discovered": 1, 00:14:31.966 "num_base_bdevs_operational": 4, 00:14:31.966 "base_bdevs_list": [ 00:14:31.966 { 00:14:31.966 "name": "BaseBdev1", 00:14:31.966 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:31.966 "is_configured": true, 00:14:31.966 "data_offset": 2048, 00:14:31.966 "data_size": 63488 00:14:31.966 }, 00:14:31.966 { 00:14:31.966 "name": "BaseBdev2", 00:14:31.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.966 "is_configured": false, 00:14:31.966 "data_offset": 0, 00:14:31.966 "data_size": 0 00:14:31.966 }, 00:14:31.966 { 00:14:31.966 "name": "BaseBdev3", 00:14:31.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.966 "is_configured": false, 00:14:31.966 "data_offset": 0, 00:14:31.966 "data_size": 0 00:14:31.966 }, 00:14:31.966 { 00:14:31.966 "name": "BaseBdev4", 00:14:31.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.966 "is_configured": false, 00:14:31.966 "data_offset": 0, 00:14:31.966 "data_size": 0 00:14:31.966 } 00:14:31.966 ] 00:14:31.966 }' 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.966 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.536 [2024-11-18 23:09:51.751428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.536 BaseBdev2 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.536 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.536 [ 00:14:32.536 { 00:14:32.536 "name": "BaseBdev2", 00:14:32.536 "aliases": [ 00:14:32.536 
"cd5a6252-3631-46c9-8367-373112b2b8c9" 00:14:32.536 ], 00:14:32.536 "product_name": "Malloc disk", 00:14:32.536 "block_size": 512, 00:14:32.536 "num_blocks": 65536, 00:14:32.536 "uuid": "cd5a6252-3631-46c9-8367-373112b2b8c9", 00:14:32.536 "assigned_rate_limits": { 00:14:32.536 "rw_ios_per_sec": 0, 00:14:32.537 "rw_mbytes_per_sec": 0, 00:14:32.537 "r_mbytes_per_sec": 0, 00:14:32.537 "w_mbytes_per_sec": 0 00:14:32.537 }, 00:14:32.537 "claimed": true, 00:14:32.537 "claim_type": "exclusive_write", 00:14:32.537 "zoned": false, 00:14:32.537 "supported_io_types": { 00:14:32.537 "read": true, 00:14:32.537 "write": true, 00:14:32.537 "unmap": true, 00:14:32.537 "flush": true, 00:14:32.537 "reset": true, 00:14:32.537 "nvme_admin": false, 00:14:32.537 "nvme_io": false, 00:14:32.537 "nvme_io_md": false, 00:14:32.537 "write_zeroes": true, 00:14:32.537 "zcopy": true, 00:14:32.537 "get_zone_info": false, 00:14:32.537 "zone_management": false, 00:14:32.537 "zone_append": false, 00:14:32.537 "compare": false, 00:14:32.537 "compare_and_write": false, 00:14:32.537 "abort": true, 00:14:32.537 "seek_hole": false, 00:14:32.537 "seek_data": false, 00:14:32.537 "copy": true, 00:14:32.537 "nvme_iov_md": false 00:14:32.537 }, 00:14:32.537 "memory_domains": [ 00:14:32.537 { 00:14:32.537 "dma_device_id": "system", 00:14:32.537 "dma_device_type": 1 00:14:32.537 }, 00:14:32.537 { 00:14:32.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.537 "dma_device_type": 2 00:14:32.537 } 00:14:32.537 ], 00:14:32.537 "driver_specific": {} 00:14:32.537 } 00:14:32.537 ] 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.537 "name": "Existed_Raid", 00:14:32.537 "uuid": 
"f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:32.537 "strip_size_kb": 64, 00:14:32.537 "state": "configuring", 00:14:32.537 "raid_level": "raid5f", 00:14:32.537 "superblock": true, 00:14:32.537 "num_base_bdevs": 4, 00:14:32.537 "num_base_bdevs_discovered": 2, 00:14:32.537 "num_base_bdevs_operational": 4, 00:14:32.537 "base_bdevs_list": [ 00:14:32.537 { 00:14:32.537 "name": "BaseBdev1", 00:14:32.537 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:32.537 "is_configured": true, 00:14:32.537 "data_offset": 2048, 00:14:32.537 "data_size": 63488 00:14:32.537 }, 00:14:32.537 { 00:14:32.537 "name": "BaseBdev2", 00:14:32.537 "uuid": "cd5a6252-3631-46c9-8367-373112b2b8c9", 00:14:32.537 "is_configured": true, 00:14:32.537 "data_offset": 2048, 00:14:32.537 "data_size": 63488 00:14:32.537 }, 00:14:32.537 { 00:14:32.537 "name": "BaseBdev3", 00:14:32.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.537 "is_configured": false, 00:14:32.537 "data_offset": 0, 00:14:32.537 "data_size": 0 00:14:32.537 }, 00:14:32.537 { 00:14:32.537 "name": "BaseBdev4", 00:14:32.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.537 "is_configured": false, 00:14:32.537 "data_offset": 0, 00:14:32.537 "data_size": 0 00:14:32.537 } 00:14:32.537 ] 00:14:32.537 }' 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.537 23:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.107 [2024-11-18 23:09:52.277317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.107 BaseBdev3 
00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.107 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.107 [ 00:14:33.108 { 00:14:33.108 "name": "BaseBdev3", 00:14:33.108 "aliases": [ 00:14:33.108 "681ffe4e-5f70-44bb-8047-b21768011aa3" 00:14:33.108 ], 00:14:33.108 "product_name": "Malloc disk", 00:14:33.108 "block_size": 512, 00:14:33.108 "num_blocks": 65536, 00:14:33.108 "uuid": "681ffe4e-5f70-44bb-8047-b21768011aa3", 00:14:33.108 
"assigned_rate_limits": { 00:14:33.108 "rw_ios_per_sec": 0, 00:14:33.108 "rw_mbytes_per_sec": 0, 00:14:33.108 "r_mbytes_per_sec": 0, 00:14:33.108 "w_mbytes_per_sec": 0 00:14:33.108 }, 00:14:33.108 "claimed": true, 00:14:33.108 "claim_type": "exclusive_write", 00:14:33.108 "zoned": false, 00:14:33.108 "supported_io_types": { 00:14:33.108 "read": true, 00:14:33.108 "write": true, 00:14:33.108 "unmap": true, 00:14:33.108 "flush": true, 00:14:33.108 "reset": true, 00:14:33.108 "nvme_admin": false, 00:14:33.108 "nvme_io": false, 00:14:33.108 "nvme_io_md": false, 00:14:33.108 "write_zeroes": true, 00:14:33.108 "zcopy": true, 00:14:33.108 "get_zone_info": false, 00:14:33.108 "zone_management": false, 00:14:33.108 "zone_append": false, 00:14:33.108 "compare": false, 00:14:33.108 "compare_and_write": false, 00:14:33.108 "abort": true, 00:14:33.108 "seek_hole": false, 00:14:33.108 "seek_data": false, 00:14:33.108 "copy": true, 00:14:33.108 "nvme_iov_md": false 00:14:33.108 }, 00:14:33.108 "memory_domains": [ 00:14:33.108 { 00:14:33.108 "dma_device_id": "system", 00:14:33.108 "dma_device_type": 1 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.108 "dma_device_type": 2 00:14:33.108 } 00:14:33.108 ], 00:14:33.108 "driver_specific": {} 00:14:33.108 } 00:14:33.108 ] 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.108 "name": "Existed_Raid", 00:14:33.108 "uuid": "f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:33.108 "strip_size_kb": 64, 00:14:33.108 "state": "configuring", 00:14:33.108 "raid_level": "raid5f", 00:14:33.108 "superblock": true, 00:14:33.108 "num_base_bdevs": 4, 00:14:33.108 "num_base_bdevs_discovered": 3, 
00:14:33.108 "num_base_bdevs_operational": 4, 00:14:33.108 "base_bdevs_list": [ 00:14:33.108 { 00:14:33.108 "name": "BaseBdev1", 00:14:33.108 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:33.108 "is_configured": true, 00:14:33.108 "data_offset": 2048, 00:14:33.108 "data_size": 63488 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "name": "BaseBdev2", 00:14:33.108 "uuid": "cd5a6252-3631-46c9-8367-373112b2b8c9", 00:14:33.108 "is_configured": true, 00:14:33.108 "data_offset": 2048, 00:14:33.108 "data_size": 63488 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "name": "BaseBdev3", 00:14:33.108 "uuid": "681ffe4e-5f70-44bb-8047-b21768011aa3", 00:14:33.108 "is_configured": true, 00:14:33.108 "data_offset": 2048, 00:14:33.108 "data_size": 63488 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "name": "BaseBdev4", 00:14:33.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.108 "is_configured": false, 00:14:33.108 "data_offset": 0, 00:14:33.108 "data_size": 0 00:14:33.108 } 00:14:33.108 ] 00:14:33.108 }' 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.108 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.367 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:33.367 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.367 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.627 [2024-11-18 23:09:52.747483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.627 [2024-11-18 23:09:52.747682] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:33.627 [2024-11-18 23:09:52.747696] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:33.627 [2024-11-18 
23:09:52.748041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:33.627 BaseBdev4 00:14:33.627 [2024-11-18 23:09:52.748526] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:33.627 [2024-11-18 23:09:52.748549] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:33.627 [2024-11-18 23:09:52.748661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:33.627 23:09:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.627 [ 00:14:33.627 { 00:14:33.627 "name": "BaseBdev4", 00:14:33.627 "aliases": [ 00:14:33.627 "af33fa10-30a2-4827-b266-8f4981f3983b" 00:14:33.627 ], 00:14:33.627 "product_name": "Malloc disk", 00:14:33.627 "block_size": 512, 00:14:33.627 "num_blocks": 65536, 00:14:33.627 "uuid": "af33fa10-30a2-4827-b266-8f4981f3983b", 00:14:33.627 "assigned_rate_limits": { 00:14:33.627 "rw_ios_per_sec": 0, 00:14:33.627 "rw_mbytes_per_sec": 0, 00:14:33.627 "r_mbytes_per_sec": 0, 00:14:33.627 "w_mbytes_per_sec": 0 00:14:33.627 }, 00:14:33.627 "claimed": true, 00:14:33.627 "claim_type": "exclusive_write", 00:14:33.627 "zoned": false, 00:14:33.627 "supported_io_types": { 00:14:33.627 "read": true, 00:14:33.627 "write": true, 00:14:33.627 "unmap": true, 00:14:33.627 "flush": true, 00:14:33.627 "reset": true, 00:14:33.627 "nvme_admin": false, 00:14:33.627 "nvme_io": false, 00:14:33.627 "nvme_io_md": false, 00:14:33.627 "write_zeroes": true, 00:14:33.627 "zcopy": true, 00:14:33.627 "get_zone_info": false, 00:14:33.627 "zone_management": false, 00:14:33.627 "zone_append": false, 00:14:33.627 "compare": false, 00:14:33.627 "compare_and_write": false, 00:14:33.627 "abort": true, 00:14:33.627 "seek_hole": false, 00:14:33.627 "seek_data": false, 00:14:33.627 "copy": true, 00:14:33.627 "nvme_iov_md": false 00:14:33.627 }, 00:14:33.627 "memory_domains": [ 00:14:33.627 { 00:14:33.627 "dma_device_id": "system", 00:14:33.627 "dma_device_type": 1 00:14:33.627 }, 00:14:33.627 { 00:14:33.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.627 "dma_device_type": 2 00:14:33.627 } 00:14:33.627 ], 00:14:33.627 "driver_specific": {} 00:14:33.627 } 00:14:33.627 ] 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.627 23:09:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.627 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:33.628 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.628 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.628 "name": "Existed_Raid", 00:14:33.628 "uuid": "f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:33.628 "strip_size_kb": 64, 00:14:33.628 "state": "online", 00:14:33.628 "raid_level": "raid5f", 00:14:33.628 "superblock": true, 00:14:33.628 "num_base_bdevs": 4, 00:14:33.628 "num_base_bdevs_discovered": 4, 00:14:33.628 "num_base_bdevs_operational": 4, 00:14:33.628 "base_bdevs_list": [ 00:14:33.628 { 00:14:33.628 "name": "BaseBdev1", 00:14:33.628 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 }, 00:14:33.628 { 00:14:33.628 "name": "BaseBdev2", 00:14:33.628 "uuid": "cd5a6252-3631-46c9-8367-373112b2b8c9", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 }, 00:14:33.628 { 00:14:33.628 "name": "BaseBdev3", 00:14:33.628 "uuid": "681ffe4e-5f70-44bb-8047-b21768011aa3", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 }, 00:14:33.628 { 00:14:33.628 "name": "BaseBdev4", 00:14:33.628 "uuid": "af33fa10-30a2-4827-b266-8f4981f3983b", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 } 00:14:33.628 ] 00:14:33.628 }' 00:14:33.628 23:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.628 23:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.888 [2024-11-18 23:09:53.119158] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.888 "name": "Existed_Raid", 00:14:33.888 "aliases": [ 00:14:33.888 "f2975575-0b29-48c6-bb1e-da87a06fc0dd" 00:14:33.888 ], 00:14:33.888 "product_name": "Raid Volume", 00:14:33.888 "block_size": 512, 00:14:33.888 "num_blocks": 190464, 00:14:33.888 "uuid": "f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:33.888 "assigned_rate_limits": { 00:14:33.888 "rw_ios_per_sec": 0, 00:14:33.888 "rw_mbytes_per_sec": 0, 00:14:33.888 "r_mbytes_per_sec": 0, 00:14:33.888 "w_mbytes_per_sec": 0 00:14:33.888 }, 00:14:33.888 "claimed": false, 00:14:33.888 "zoned": false, 00:14:33.888 "supported_io_types": { 00:14:33.888 "read": true, 00:14:33.888 "write": true, 00:14:33.888 "unmap": false, 00:14:33.888 "flush": false, 
00:14:33.888 "reset": true, 00:14:33.888 "nvme_admin": false, 00:14:33.888 "nvme_io": false, 00:14:33.888 "nvme_io_md": false, 00:14:33.888 "write_zeroes": true, 00:14:33.888 "zcopy": false, 00:14:33.888 "get_zone_info": false, 00:14:33.888 "zone_management": false, 00:14:33.888 "zone_append": false, 00:14:33.888 "compare": false, 00:14:33.888 "compare_and_write": false, 00:14:33.888 "abort": false, 00:14:33.888 "seek_hole": false, 00:14:33.888 "seek_data": false, 00:14:33.888 "copy": false, 00:14:33.888 "nvme_iov_md": false 00:14:33.888 }, 00:14:33.888 "driver_specific": { 00:14:33.888 "raid": { 00:14:33.888 "uuid": "f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:33.888 "strip_size_kb": 64, 00:14:33.888 "state": "online", 00:14:33.888 "raid_level": "raid5f", 00:14:33.888 "superblock": true, 00:14:33.888 "num_base_bdevs": 4, 00:14:33.888 "num_base_bdevs_discovered": 4, 00:14:33.888 "num_base_bdevs_operational": 4, 00:14:33.888 "base_bdevs_list": [ 00:14:33.888 { 00:14:33.888 "name": "BaseBdev1", 00:14:33.888 "uuid": "aa6a7a7b-93ca-49de-ab7c-eb0c2655fa44", 00:14:33.888 "is_configured": true, 00:14:33.888 "data_offset": 2048, 00:14:33.888 "data_size": 63488 00:14:33.888 }, 00:14:33.888 { 00:14:33.888 "name": "BaseBdev2", 00:14:33.888 "uuid": "cd5a6252-3631-46c9-8367-373112b2b8c9", 00:14:33.888 "is_configured": true, 00:14:33.888 "data_offset": 2048, 00:14:33.888 "data_size": 63488 00:14:33.888 }, 00:14:33.888 { 00:14:33.888 "name": "BaseBdev3", 00:14:33.888 "uuid": "681ffe4e-5f70-44bb-8047-b21768011aa3", 00:14:33.888 "is_configured": true, 00:14:33.888 "data_offset": 2048, 00:14:33.888 "data_size": 63488 00:14:33.888 }, 00:14:33.888 { 00:14:33.888 "name": "BaseBdev4", 00:14:33.888 "uuid": "af33fa10-30a2-4827-b266-8f4981f3983b", 00:14:33.888 "is_configured": true, 00:14:33.888 "data_offset": 2048, 00:14:33.888 "data_size": 63488 00:14:33.888 } 00:14:33.888 ] 00:14:33.888 } 00:14:33.888 } 00:14:33.888 }' 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:33.888 BaseBdev2 00:14:33.888 BaseBdev3 00:14:33.888 BaseBdev4' 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.888 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.889 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:33.889 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.889 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.889 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.889 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.150 23:09:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.150 [2024-11-18 23:09:53.446430] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.150 "name": "Existed_Raid", 00:14:34.150 "uuid": "f2975575-0b29-48c6-bb1e-da87a06fc0dd", 00:14:34.150 "strip_size_kb": 64, 00:14:34.150 "state": "online", 00:14:34.150 "raid_level": "raid5f", 00:14:34.150 "superblock": true, 00:14:34.150 "num_base_bdevs": 4, 00:14:34.150 "num_base_bdevs_discovered": 3, 00:14:34.150 "num_base_bdevs_operational": 3, 00:14:34.150 "base_bdevs_list": [ 00:14:34.150 { 00:14:34.150 "name": null, 00:14:34.150 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:34.150 "is_configured": false, 00:14:34.150 "data_offset": 0, 00:14:34.150 "data_size": 63488 00:14:34.150 }, 00:14:34.150 { 00:14:34.150 "name": "BaseBdev2", 00:14:34.150 "uuid": "cd5a6252-3631-46c9-8367-373112b2b8c9", 00:14:34.150 "is_configured": true, 00:14:34.150 "data_offset": 2048, 00:14:34.150 "data_size": 63488 00:14:34.150 }, 00:14:34.150 { 00:14:34.150 "name": "BaseBdev3", 00:14:34.150 "uuid": "681ffe4e-5f70-44bb-8047-b21768011aa3", 00:14:34.150 "is_configured": true, 00:14:34.150 "data_offset": 2048, 00:14:34.150 "data_size": 63488 00:14:34.150 }, 00:14:34.150 { 00:14:34.150 "name": "BaseBdev4", 00:14:34.150 "uuid": "af33fa10-30a2-4827-b266-8f4981f3983b", 00:14:34.150 "is_configured": true, 00:14:34.150 "data_offset": 2048, 00:14:34.150 "data_size": 63488 00:14:34.150 } 00:14:34.150 ] 00:14:34.150 }' 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.150 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.734 [2024-11-18 23:09:53.904994] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.734 [2024-11-18 23:09:53.905128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.734 [2024-11-18 23:09:53.916170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.734 
23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:34.734 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.735 [2024-11-18 23:09:53.976083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.735 23:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.735 [2024-11-18 23:09:54.046456] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:34.735 [2024-11-18 23:09:54.046515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:34.735 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.996 BaseBdev2 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.996 [ 00:14:34.996 { 00:14:34.996 "name": "BaseBdev2", 00:14:34.996 "aliases": [ 00:14:34.996 "81f6d106-b82e-4d5d-a588-df0c42a757d5" 00:14:34.996 ], 00:14:34.996 "product_name": "Malloc disk", 00:14:34.996 "block_size": 512, 00:14:34.996 "num_blocks": 65536, 00:14:34.996 "uuid": 
"81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:34.996 "assigned_rate_limits": { 00:14:34.996 "rw_ios_per_sec": 0, 00:14:34.996 "rw_mbytes_per_sec": 0, 00:14:34.996 "r_mbytes_per_sec": 0, 00:14:34.996 "w_mbytes_per_sec": 0 00:14:34.996 }, 00:14:34.996 "claimed": false, 00:14:34.996 "zoned": false, 00:14:34.996 "supported_io_types": { 00:14:34.996 "read": true, 00:14:34.996 "write": true, 00:14:34.996 "unmap": true, 00:14:34.996 "flush": true, 00:14:34.996 "reset": true, 00:14:34.996 "nvme_admin": false, 00:14:34.996 "nvme_io": false, 00:14:34.996 "nvme_io_md": false, 00:14:34.996 "write_zeroes": true, 00:14:34.996 "zcopy": true, 00:14:34.996 "get_zone_info": false, 00:14:34.996 "zone_management": false, 00:14:34.996 "zone_append": false, 00:14:34.996 "compare": false, 00:14:34.996 "compare_and_write": false, 00:14:34.996 "abort": true, 00:14:34.996 "seek_hole": false, 00:14:34.996 "seek_data": false, 00:14:34.996 "copy": true, 00:14:34.996 "nvme_iov_md": false 00:14:34.996 }, 00:14:34.996 "memory_domains": [ 00:14:34.996 { 00:14:34.996 "dma_device_id": "system", 00:14:34.996 "dma_device_type": 1 00:14:34.996 }, 00:14:34.996 { 00:14:34.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.996 "dma_device_type": 2 00:14:34.996 } 00:14:34.996 ], 00:14:34.996 "driver_specific": {} 00:14:34.996 } 00:14:34.996 ] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.996 BaseBdev3 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.996 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 [ 00:14:34.997 { 00:14:34.997 "name": "BaseBdev3", 00:14:34.997 "aliases": [ 00:14:34.997 "a3d8e320-f4fc-43e3-9483-a5286c748c79" 00:14:34.997 ], 00:14:34.997 
"product_name": "Malloc disk", 00:14:34.997 "block_size": 512, 00:14:34.997 "num_blocks": 65536, 00:14:34.997 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:34.997 "assigned_rate_limits": { 00:14:34.997 "rw_ios_per_sec": 0, 00:14:34.997 "rw_mbytes_per_sec": 0, 00:14:34.997 "r_mbytes_per_sec": 0, 00:14:34.997 "w_mbytes_per_sec": 0 00:14:34.997 }, 00:14:34.997 "claimed": false, 00:14:34.997 "zoned": false, 00:14:34.997 "supported_io_types": { 00:14:34.997 "read": true, 00:14:34.997 "write": true, 00:14:34.997 "unmap": true, 00:14:34.997 "flush": true, 00:14:34.997 "reset": true, 00:14:34.997 "nvme_admin": false, 00:14:34.997 "nvme_io": false, 00:14:34.997 "nvme_io_md": false, 00:14:34.997 "write_zeroes": true, 00:14:34.997 "zcopy": true, 00:14:34.997 "get_zone_info": false, 00:14:34.997 "zone_management": false, 00:14:34.997 "zone_append": false, 00:14:34.997 "compare": false, 00:14:34.997 "compare_and_write": false, 00:14:34.997 "abort": true, 00:14:34.997 "seek_hole": false, 00:14:34.997 "seek_data": false, 00:14:34.997 "copy": true, 00:14:34.997 "nvme_iov_md": false 00:14:34.997 }, 00:14:34.997 "memory_domains": [ 00:14:34.997 { 00:14:34.997 "dma_device_id": "system", 00:14:34.997 "dma_device_type": 1 00:14:34.997 }, 00:14:34.997 { 00:14:34.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.997 "dma_device_type": 2 00:14:34.997 } 00:14:34.997 ], 00:14:34.997 "driver_specific": {} 00:14:34.997 } 00:14:34.997 ] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 BaseBdev4 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 [ 00:14:34.997 { 00:14:34.997 "name": "BaseBdev4", 00:14:34.997 
"aliases": [ 00:14:34.997 "04fc5f25-8b6a-4843-836e-4a4d6865eecf" 00:14:34.997 ], 00:14:34.997 "product_name": "Malloc disk", 00:14:34.997 "block_size": 512, 00:14:34.997 "num_blocks": 65536, 00:14:34.997 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:34.997 "assigned_rate_limits": { 00:14:34.997 "rw_ios_per_sec": 0, 00:14:34.997 "rw_mbytes_per_sec": 0, 00:14:34.997 "r_mbytes_per_sec": 0, 00:14:34.997 "w_mbytes_per_sec": 0 00:14:34.997 }, 00:14:34.997 "claimed": false, 00:14:34.997 "zoned": false, 00:14:34.997 "supported_io_types": { 00:14:34.997 "read": true, 00:14:34.997 "write": true, 00:14:34.997 "unmap": true, 00:14:34.997 "flush": true, 00:14:34.997 "reset": true, 00:14:34.997 "nvme_admin": false, 00:14:34.997 "nvme_io": false, 00:14:34.997 "nvme_io_md": false, 00:14:34.997 "write_zeroes": true, 00:14:34.997 "zcopy": true, 00:14:34.997 "get_zone_info": false, 00:14:34.997 "zone_management": false, 00:14:34.997 "zone_append": false, 00:14:34.997 "compare": false, 00:14:34.997 "compare_and_write": false, 00:14:34.997 "abort": true, 00:14:34.997 "seek_hole": false, 00:14:34.997 "seek_data": false, 00:14:34.997 "copy": true, 00:14:34.997 "nvme_iov_md": false 00:14:34.997 }, 00:14:34.997 "memory_domains": [ 00:14:34.997 { 00:14:34.997 "dma_device_id": "system", 00:14:34.997 "dma_device_type": 1 00:14:34.997 }, 00:14:34.997 { 00:14:34.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.997 "dma_device_type": 2 00:14:34.997 } 00:14:34.997 ], 00:14:34.997 "driver_specific": {} 00:14:34.997 } 00:14:34.997 ] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.997 
23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 [2024-11-18 23:09:54.273700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.997 [2024-11-18 23:09:54.273796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.998 [2024-11-18 23:09:54.273836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.998 [2024-11-18 23:09:54.275659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.998 [2024-11-18 23:09:54.275748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.998 "name": "Existed_Raid", 00:14:34.998 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:34.998 "strip_size_kb": 64, 00:14:34.998 "state": "configuring", 00:14:34.998 "raid_level": "raid5f", 00:14:34.998 "superblock": true, 00:14:34.998 "num_base_bdevs": 4, 00:14:34.998 "num_base_bdevs_discovered": 3, 00:14:34.998 "num_base_bdevs_operational": 4, 00:14:34.998 "base_bdevs_list": [ 00:14:34.998 { 00:14:34.998 "name": "BaseBdev1", 00:14:34.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.998 "is_configured": false, 00:14:34.998 "data_offset": 0, 00:14:34.998 "data_size": 0 00:14:34.998 }, 00:14:34.998 { 00:14:34.998 "name": "BaseBdev2", 00:14:34.998 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:34.998 "is_configured": true, 00:14:34.998 "data_offset": 2048, 00:14:34.998 "data_size": 63488 00:14:34.998 }, 00:14:34.998 { 00:14:34.998 "name": "BaseBdev3", 
00:14:34.998 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:34.998 "is_configured": true, 00:14:34.998 "data_offset": 2048, 00:14:34.998 "data_size": 63488 00:14:34.998 }, 00:14:34.998 { 00:14:34.998 "name": "BaseBdev4", 00:14:34.998 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:34.998 "is_configured": true, 00:14:34.998 "data_offset": 2048, 00:14:34.998 "data_size": 63488 00:14:34.998 } 00:14:34.998 ] 00:14:34.998 }' 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.998 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.568 [2024-11-18 23:09:54.744878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.568 
23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.568 "name": "Existed_Raid", 00:14:35.568 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:35.568 "strip_size_kb": 64, 00:14:35.568 "state": "configuring", 00:14:35.568 "raid_level": "raid5f", 00:14:35.568 "superblock": true, 00:14:35.568 "num_base_bdevs": 4, 00:14:35.568 "num_base_bdevs_discovered": 2, 00:14:35.568 "num_base_bdevs_operational": 4, 00:14:35.568 "base_bdevs_list": [ 00:14:35.568 { 00:14:35.568 "name": "BaseBdev1", 00:14:35.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.568 "is_configured": false, 00:14:35.568 "data_offset": 0, 00:14:35.568 "data_size": 0 00:14:35.568 }, 00:14:35.568 { 00:14:35.568 "name": null, 00:14:35.568 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:35.568 "is_configured": false, 00:14:35.568 "data_offset": 0, 00:14:35.568 "data_size": 63488 00:14:35.568 }, 00:14:35.568 { 
00:14:35.568 "name": "BaseBdev3", 00:14:35.568 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:35.568 "is_configured": true, 00:14:35.568 "data_offset": 2048, 00:14:35.568 "data_size": 63488 00:14:35.568 }, 00:14:35.568 { 00:14:35.568 "name": "BaseBdev4", 00:14:35.568 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:35.568 "is_configured": true, 00:14:35.568 "data_offset": 2048, 00:14:35.568 "data_size": 63488 00:14:35.568 } 00:14:35.568 ] 00:14:35.568 }' 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.568 23:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.828 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.828 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:35.828 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.828 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.088 [2024-11-18 23:09:55.262995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.088 BaseBdev1 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.088 [ 00:14:36.088 { 00:14:36.088 "name": "BaseBdev1", 00:14:36.088 "aliases": [ 00:14:36.088 "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39" 00:14:36.088 ], 00:14:36.088 "product_name": "Malloc disk", 00:14:36.088 "block_size": 512, 00:14:36.088 "num_blocks": 65536, 00:14:36.088 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:36.088 "assigned_rate_limits": { 00:14:36.088 "rw_ios_per_sec": 0, 00:14:36.088 "rw_mbytes_per_sec": 0, 00:14:36.088 
"r_mbytes_per_sec": 0, 00:14:36.088 "w_mbytes_per_sec": 0 00:14:36.088 }, 00:14:36.088 "claimed": true, 00:14:36.088 "claim_type": "exclusive_write", 00:14:36.088 "zoned": false, 00:14:36.088 "supported_io_types": { 00:14:36.088 "read": true, 00:14:36.088 "write": true, 00:14:36.088 "unmap": true, 00:14:36.088 "flush": true, 00:14:36.088 "reset": true, 00:14:36.088 "nvme_admin": false, 00:14:36.088 "nvme_io": false, 00:14:36.088 "nvme_io_md": false, 00:14:36.088 "write_zeroes": true, 00:14:36.088 "zcopy": true, 00:14:36.088 "get_zone_info": false, 00:14:36.088 "zone_management": false, 00:14:36.088 "zone_append": false, 00:14:36.088 "compare": false, 00:14:36.088 "compare_and_write": false, 00:14:36.088 "abort": true, 00:14:36.088 "seek_hole": false, 00:14:36.088 "seek_data": false, 00:14:36.088 "copy": true, 00:14:36.088 "nvme_iov_md": false 00:14:36.088 }, 00:14:36.088 "memory_domains": [ 00:14:36.088 { 00:14:36.088 "dma_device_id": "system", 00:14:36.088 "dma_device_type": 1 00:14:36.088 }, 00:14:36.088 { 00:14:36.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.088 "dma_device_type": 2 00:14:36.088 } 00:14:36.088 ], 00:14:36.088 "driver_specific": {} 00:14:36.088 } 00:14:36.088 ] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.088 23:09:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.088 "name": "Existed_Raid", 00:14:36.088 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:36.088 "strip_size_kb": 64, 00:14:36.088 "state": "configuring", 00:14:36.088 "raid_level": "raid5f", 00:14:36.088 "superblock": true, 00:14:36.088 "num_base_bdevs": 4, 00:14:36.088 "num_base_bdevs_discovered": 3, 00:14:36.088 "num_base_bdevs_operational": 4, 00:14:36.088 "base_bdevs_list": [ 00:14:36.088 { 00:14:36.088 "name": "BaseBdev1", 00:14:36.088 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:36.088 "is_configured": true, 00:14:36.088 "data_offset": 2048, 00:14:36.088 "data_size": 63488 00:14:36.088 
}, 00:14:36.088 { 00:14:36.088 "name": null, 00:14:36.088 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:36.088 "is_configured": false, 00:14:36.088 "data_offset": 0, 00:14:36.088 "data_size": 63488 00:14:36.088 }, 00:14:36.088 { 00:14:36.088 "name": "BaseBdev3", 00:14:36.088 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:36.088 "is_configured": true, 00:14:36.088 "data_offset": 2048, 00:14:36.088 "data_size": 63488 00:14:36.088 }, 00:14:36.088 { 00:14:36.088 "name": "BaseBdev4", 00:14:36.088 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:36.088 "is_configured": true, 00:14:36.088 "data_offset": 2048, 00:14:36.088 "data_size": 63488 00:14:36.088 } 00:14:36.088 ] 00:14:36.088 }' 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.088 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:36.658 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.659 
[2024-11-18 23:09:55.786134] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.659 "name": "Existed_Raid", 00:14:36.659 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:36.659 "strip_size_kb": 64, 00:14:36.659 "state": "configuring", 00:14:36.659 "raid_level": "raid5f", 00:14:36.659 "superblock": true, 00:14:36.659 "num_base_bdevs": 4, 00:14:36.659 "num_base_bdevs_discovered": 2, 00:14:36.659 "num_base_bdevs_operational": 4, 00:14:36.659 "base_bdevs_list": [ 00:14:36.659 { 00:14:36.659 "name": "BaseBdev1", 00:14:36.659 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:36.659 "is_configured": true, 00:14:36.659 "data_offset": 2048, 00:14:36.659 "data_size": 63488 00:14:36.659 }, 00:14:36.659 { 00:14:36.659 "name": null, 00:14:36.659 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:36.659 "is_configured": false, 00:14:36.659 "data_offset": 0, 00:14:36.659 "data_size": 63488 00:14:36.659 }, 00:14:36.659 { 00:14:36.659 "name": null, 00:14:36.659 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:36.659 "is_configured": false, 00:14:36.659 "data_offset": 0, 00:14:36.659 "data_size": 63488 00:14:36.659 }, 00:14:36.659 { 00:14:36.659 "name": "BaseBdev4", 00:14:36.659 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:36.659 "is_configured": true, 00:14:36.659 "data_offset": 2048, 00:14:36.659 "data_size": 63488 00:14:36.659 } 00:14:36.659 ] 00:14:36.659 }' 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.659 23:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.919 [2024-11-18 23:09:56.273347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.919 23:09:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.919 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.179 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.179 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.179 "name": "Existed_Raid", 00:14:37.179 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:37.179 "strip_size_kb": 64, 00:14:37.179 "state": "configuring", 00:14:37.179 "raid_level": "raid5f", 00:14:37.179 "superblock": true, 00:14:37.179 "num_base_bdevs": 4, 00:14:37.179 "num_base_bdevs_discovered": 3, 00:14:37.179 "num_base_bdevs_operational": 4, 00:14:37.179 "base_bdevs_list": [ 00:14:37.179 { 00:14:37.179 "name": "BaseBdev1", 00:14:37.179 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:37.179 "is_configured": true, 00:14:37.179 "data_offset": 2048, 00:14:37.179 "data_size": 63488 00:14:37.179 }, 00:14:37.179 { 00:14:37.179 "name": null, 00:14:37.179 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:37.179 "is_configured": false, 00:14:37.179 "data_offset": 0, 00:14:37.179 "data_size": 63488 00:14:37.179 }, 00:14:37.179 { 00:14:37.179 "name": "BaseBdev3", 00:14:37.179 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:37.179 "is_configured": true, 00:14:37.179 "data_offset": 2048, 00:14:37.179 "data_size": 63488 00:14:37.179 }, 00:14:37.179 { 
00:14:37.179 "name": "BaseBdev4", 00:14:37.179 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:37.179 "is_configured": true, 00:14:37.179 "data_offset": 2048, 00:14:37.179 "data_size": 63488 00:14:37.179 } 00:14:37.179 ] 00:14:37.179 }' 00:14:37.179 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.179 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.460 [2024-11-18 23:09:56.776486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.460 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.735 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.735 "name": "Existed_Raid", 00:14:37.735 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:37.735 "strip_size_kb": 64, 00:14:37.735 "state": "configuring", 00:14:37.735 "raid_level": "raid5f", 00:14:37.735 "superblock": true, 00:14:37.735 "num_base_bdevs": 4, 00:14:37.735 "num_base_bdevs_discovered": 2, 00:14:37.735 
"num_base_bdevs_operational": 4, 00:14:37.735 "base_bdevs_list": [ 00:14:37.735 { 00:14:37.735 "name": null, 00:14:37.735 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:37.735 "is_configured": false, 00:14:37.735 "data_offset": 0, 00:14:37.735 "data_size": 63488 00:14:37.735 }, 00:14:37.735 { 00:14:37.735 "name": null, 00:14:37.735 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:37.735 "is_configured": false, 00:14:37.735 "data_offset": 0, 00:14:37.735 "data_size": 63488 00:14:37.735 }, 00:14:37.735 { 00:14:37.735 "name": "BaseBdev3", 00:14:37.735 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:37.735 "is_configured": true, 00:14:37.735 "data_offset": 2048, 00:14:37.735 "data_size": 63488 00:14:37.735 }, 00:14:37.735 { 00:14:37.735 "name": "BaseBdev4", 00:14:37.735 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:37.735 "is_configured": true, 00:14:37.735 "data_offset": 2048, 00:14:37.735 "data_size": 63488 00:14:37.735 } 00:14:37.735 ] 00:14:37.735 }' 00:14:37.735 23:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.735 23:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 [2024-11-18 23:09:57.221954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.995 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.995 "name": "Existed_Raid", 00:14:37.995 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:37.995 "strip_size_kb": 64, 00:14:37.995 "state": "configuring", 00:14:37.995 "raid_level": "raid5f", 00:14:37.995 "superblock": true, 00:14:37.995 "num_base_bdevs": 4, 00:14:37.995 "num_base_bdevs_discovered": 3, 00:14:37.996 "num_base_bdevs_operational": 4, 00:14:37.996 "base_bdevs_list": [ 00:14:37.996 { 00:14:37.996 "name": null, 00:14:37.996 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:37.996 "is_configured": false, 00:14:37.996 "data_offset": 0, 00:14:37.996 "data_size": 63488 00:14:37.996 }, 00:14:37.996 { 00:14:37.996 "name": "BaseBdev2", 00:14:37.996 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 }, 00:14:37.996 { 00:14:37.996 "name": "BaseBdev3", 00:14:37.996 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 }, 00:14:37.996 { 00:14:37.996 "name": "BaseBdev4", 00:14:37.996 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 } 00:14:37.996 ] 00:14:37.996 }' 00:14:37.996 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.996 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc8cccbc-fb09-4e91-be8f-7d1b72a55f39 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.566 [2024-11-18 23:09:57.732046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:38.566 [2024-11-18 23:09:57.732308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:38.566 [2024-11-18 
23:09:57.732324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:38.566 [2024-11-18 23:09:57.732580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:38.566 NewBaseBdev 00:14:38.566 [2024-11-18 23:09:57.733004] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:38.566 [2024-11-18 23:09:57.733028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:38.566 [2024-11-18 23:09:57.733124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.566 [ 00:14:38.566 { 00:14:38.566 "name": "NewBaseBdev", 00:14:38.566 "aliases": [ 00:14:38.566 "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39" 00:14:38.566 ], 00:14:38.566 "product_name": "Malloc disk", 00:14:38.566 "block_size": 512, 00:14:38.566 "num_blocks": 65536, 00:14:38.566 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:38.566 "assigned_rate_limits": { 00:14:38.566 "rw_ios_per_sec": 0, 00:14:38.566 "rw_mbytes_per_sec": 0, 00:14:38.566 "r_mbytes_per_sec": 0, 00:14:38.566 "w_mbytes_per_sec": 0 00:14:38.566 }, 00:14:38.566 "claimed": true, 00:14:38.566 "claim_type": "exclusive_write", 00:14:38.566 "zoned": false, 00:14:38.566 "supported_io_types": { 00:14:38.566 "read": true, 00:14:38.566 "write": true, 00:14:38.566 "unmap": true, 00:14:38.566 "flush": true, 00:14:38.566 "reset": true, 00:14:38.566 "nvme_admin": false, 00:14:38.566 "nvme_io": false, 00:14:38.566 "nvme_io_md": false, 00:14:38.566 "write_zeroes": true, 00:14:38.566 "zcopy": true, 00:14:38.566 "get_zone_info": false, 00:14:38.566 "zone_management": false, 00:14:38.566 "zone_append": false, 00:14:38.566 "compare": false, 00:14:38.566 "compare_and_write": false, 00:14:38.566 "abort": true, 00:14:38.566 "seek_hole": false, 00:14:38.566 "seek_data": false, 00:14:38.566 "copy": true, 00:14:38.566 "nvme_iov_md": false 00:14:38.566 }, 00:14:38.566 "memory_domains": [ 00:14:38.566 { 00:14:38.566 "dma_device_id": "system", 00:14:38.566 "dma_device_type": 1 00:14:38.566 }, 00:14:38.566 { 00:14:38.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.566 "dma_device_type": 2 00:14:38.566 } 00:14:38.566 ], 00:14:38.566 "driver_specific": {} 00:14:38.566 } 00:14:38.566 ] 00:14:38.566 23:09:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:38.566 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.566 "name": "Existed_Raid", 00:14:38.566 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:38.566 "strip_size_kb": 64, 00:14:38.566 "state": "online", 00:14:38.566 "raid_level": "raid5f", 00:14:38.566 "superblock": true, 00:14:38.566 "num_base_bdevs": 4, 00:14:38.566 "num_base_bdevs_discovered": 4, 00:14:38.566 "num_base_bdevs_operational": 4, 00:14:38.566 "base_bdevs_list": [ 00:14:38.566 { 00:14:38.566 "name": "NewBaseBdev", 00:14:38.566 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:38.566 "is_configured": true, 00:14:38.566 "data_offset": 2048, 00:14:38.566 "data_size": 63488 00:14:38.566 }, 00:14:38.566 { 00:14:38.566 "name": "BaseBdev2", 00:14:38.566 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:38.566 "is_configured": true, 00:14:38.566 "data_offset": 2048, 00:14:38.566 "data_size": 63488 00:14:38.567 }, 00:14:38.567 { 00:14:38.567 "name": "BaseBdev3", 00:14:38.567 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:38.567 "is_configured": true, 00:14:38.567 "data_offset": 2048, 00:14:38.567 "data_size": 63488 00:14:38.567 }, 00:14:38.567 { 00:14:38.567 "name": "BaseBdev4", 00:14:38.567 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:38.567 "is_configured": true, 00:14:38.567 "data_offset": 2048, 00:14:38.567 "data_size": 63488 00:14:38.567 } 00:14:38.567 ] 00:14:38.567 }' 00:14:38.567 23:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.567 23:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- 
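The `verify_raid_bdev_state` calls that recur throughout this log all follow one pattern: fetch the RAID bdev JSON via `rpc_cmd bdev_raid_get_bdevs all`, select the entry by name, then compare fields such as `state` against the expected value. A minimal standalone sketch of that check (the JSON snippet is a hardcoded stand-in for the live RPC output, and `sed` stands in for `jq` to keep the sketch dependency-free):

```shell
#!/bin/sh
# Sketch only: a static JSON fragment replaces the live
# `rpc_cmd bdev_raid_get_bdevs all` output; values are illustrative.
raid_bdev_info='{"name": "Existed_Raid", "state": "online", "num_base_bdevs_discovered": 4}'
expected_state=online

# Extract the "state" field from the single-line JSON fragment.
state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')

if [ "$state" = "$expected_state" ]; then
  echo "state OK: $state"
else
  echo "state mismatch: got '$state', want '$expected_state'" >&2
  exit 1
fi
```

In the real test the comparison is done with bash `[[ ... == ... ]]` on jq output, as the xtrace lines above show; the control flow is the same.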
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.826 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.086 [2024-11-18 23:09:58.203546] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.086 "name": "Existed_Raid", 00:14:39.086 "aliases": [ 00:14:39.086 "1d5e1561-27a2-4f96-a536-7165ab1a9ac8" 00:14:39.086 ], 00:14:39.086 "product_name": "Raid Volume", 00:14:39.086 "block_size": 512, 00:14:39.086 "num_blocks": 190464, 00:14:39.086 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:39.086 "assigned_rate_limits": { 00:14:39.086 "rw_ios_per_sec": 0, 00:14:39.086 "rw_mbytes_per_sec": 0, 00:14:39.086 "r_mbytes_per_sec": 0, 00:14:39.086 "w_mbytes_per_sec": 0 00:14:39.086 }, 00:14:39.086 "claimed": false, 00:14:39.086 "zoned": false, 00:14:39.086 "supported_io_types": { 00:14:39.086 "read": true, 00:14:39.086 "write": true, 00:14:39.086 "unmap": false, 00:14:39.086 "flush": false, 00:14:39.086 "reset": true, 00:14:39.086 "nvme_admin": false, 00:14:39.086 "nvme_io": false, 
00:14:39.086 "nvme_io_md": false, 00:14:39.086 "write_zeroes": true, 00:14:39.086 "zcopy": false, 00:14:39.086 "get_zone_info": false, 00:14:39.086 "zone_management": false, 00:14:39.086 "zone_append": false, 00:14:39.086 "compare": false, 00:14:39.086 "compare_and_write": false, 00:14:39.086 "abort": false, 00:14:39.086 "seek_hole": false, 00:14:39.086 "seek_data": false, 00:14:39.086 "copy": false, 00:14:39.086 "nvme_iov_md": false 00:14:39.086 }, 00:14:39.086 "driver_specific": { 00:14:39.086 "raid": { 00:14:39.086 "uuid": "1d5e1561-27a2-4f96-a536-7165ab1a9ac8", 00:14:39.086 "strip_size_kb": 64, 00:14:39.086 "state": "online", 00:14:39.086 "raid_level": "raid5f", 00:14:39.086 "superblock": true, 00:14:39.086 "num_base_bdevs": 4, 00:14:39.086 "num_base_bdevs_discovered": 4, 00:14:39.086 "num_base_bdevs_operational": 4, 00:14:39.086 "base_bdevs_list": [ 00:14:39.086 { 00:14:39.086 "name": "NewBaseBdev", 00:14:39.086 "uuid": "fc8cccbc-fb09-4e91-be8f-7d1b72a55f39", 00:14:39.086 "is_configured": true, 00:14:39.086 "data_offset": 2048, 00:14:39.086 "data_size": 63488 00:14:39.086 }, 00:14:39.086 { 00:14:39.086 "name": "BaseBdev2", 00:14:39.086 "uuid": "81f6d106-b82e-4d5d-a588-df0c42a757d5", 00:14:39.086 "is_configured": true, 00:14:39.086 "data_offset": 2048, 00:14:39.086 "data_size": 63488 00:14:39.086 }, 00:14:39.086 { 00:14:39.086 "name": "BaseBdev3", 00:14:39.086 "uuid": "a3d8e320-f4fc-43e3-9483-a5286c748c79", 00:14:39.086 "is_configured": true, 00:14:39.086 "data_offset": 2048, 00:14:39.086 "data_size": 63488 00:14:39.086 }, 00:14:39.086 { 00:14:39.086 "name": "BaseBdev4", 00:14:39.086 "uuid": "04fc5f25-8b6a-4843-836e-4a4d6865eecf", 00:14:39.086 "is_configured": true, 00:14:39.086 "data_offset": 2048, 00:14:39.086 "data_size": 63488 00:14:39.086 } 00:14:39.086 ] 00:14:39.086 } 00:14:39.086 } 00:14:39.086 }' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:39.086 BaseBdev2 00:14:39.086 BaseBdev3 00:14:39.086 BaseBdev4' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.086 23:09:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.086 23:09:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.086 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.345 [2024-11-18 23:09:58.494826] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.345 [2024-11-18 23:09:58.494891] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.345 [2024-11-18 23:09:58.494957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.345 [2024-11-18 23:09:58.495214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.345 [2024-11-18 23:09:58.495224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93860 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93860 ']' 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93860 00:14:39.345 23:09:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93860 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93860' 00:14:39.345 killing process with pid 93860 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93860 00:14:39.345 [2024-11-18 23:09:58.549502] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.345 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93860 00:14:39.345 [2024-11-18 23:09:58.589316] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.609 23:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:39.609 ************************************ 00:14:39.609 00:14:39.609 real 0m9.458s 00:14:39.609 user 0m16.027s 00:14:39.609 sys 0m2.151s 00:14:39.609 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.609 23:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.609 END TEST raid5f_state_function_test_sb 00:14:39.609 ************************************ 00:14:39.609 23:09:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:39.609 23:09:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:39.609 
23:09:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.609 23:09:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.609 ************************************ 00:14:39.609 START TEST raid5f_superblock_test 00:14:39.609 ************************************ 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:39.609 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94509 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94509 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94509 ']' 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.610 23:09:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.871 [2024-11-18 23:09:59.001084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:39.871 [2024-11-18 23:09:59.001229] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94509 ] 00:14:39.871 [2024-11-18 23:09:59.161976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.871 [2024-11-18 23:09:59.207540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.137 [2024-11-18 23:09:59.250631] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.137 [2024-11-18 23:09:59.250663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.707 malloc1 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.707 [2024-11-18 23:09:59.853207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:40.707 [2024-11-18 23:09:59.853352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.707 [2024-11-18 23:09:59.853397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:40.707 [2024-11-18 23:09:59.853454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.707 [2024-11-18 23:09:59.855554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.707 [2024-11-18 23:09:59.855630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:40.707 pt1 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.707 malloc2 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.707 [2024-11-18 23:09:59.903146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.707 [2024-11-18 23:09:59.903313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.707 [2024-11-18 23:09:59.903384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:40.707 [2024-11-18 23:09:59.903418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.707 [2024-11-18 23:09:59.907959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.707 [2024-11-18 23:09:59.908027] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.707 pt2 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.707 malloc3 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.707 [2024-11-18 23:09:59.937639] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.707 [2024-11-18 23:09:59.937729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.707 [2024-11-18 23:09:59.937796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:40.707 [2024-11-18 23:09:59.937825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.707 [2024-11-18 23:09:59.939921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.707 [2024-11-18 23:09:59.939991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.707 pt3 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:40.707 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.708 23:09:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.708 malloc4 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.708 [2024-11-18 23:09:59.970268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:40.708 [2024-11-18 23:09:59.970368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.708 [2024-11-18 23:09:59.970409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:40.708 [2024-11-18 23:09:59.970442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.708 [2024-11-18 23:09:59.972466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.708 [2024-11-18 23:09:59.972546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:40.708 pt4 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.708 [2024-11-18 23:09:59.982331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:40.708 [2024-11-18 23:09:59.984112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.708 [2024-11-18 23:09:59.984171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.708 [2024-11-18 23:09:59.984230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:40.708 [2024-11-18 23:09:59.984393] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:40.708 [2024-11-18 23:09:59.984416] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:40.708 [2024-11-18 23:09:59.984644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:40.708 [2024-11-18 23:09:59.985087] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:40.708 [2024-11-18 23:09:59.985106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:40.708 [2024-11-18 23:09:59.985212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.708 
23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.708 23:09:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.708 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.708 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.708 "name": "raid_bdev1", 00:14:40.708 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:40.708 "strip_size_kb": 64, 00:14:40.708 "state": "online", 00:14:40.708 "raid_level": "raid5f", 00:14:40.708 "superblock": true, 00:14:40.708 "num_base_bdevs": 4, 00:14:40.708 "num_base_bdevs_discovered": 4, 00:14:40.708 "num_base_bdevs_operational": 4, 00:14:40.708 "base_bdevs_list": [ 00:14:40.708 { 00:14:40.708 "name": "pt1", 00:14:40.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.708 "is_configured": true, 00:14:40.708 "data_offset": 2048, 00:14:40.708 "data_size": 63488 00:14:40.708 }, 00:14:40.708 { 00:14:40.708 "name": "pt2", 00:14:40.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.708 "is_configured": true, 00:14:40.708 "data_offset": 2048, 00:14:40.708 
"data_size": 63488 00:14:40.708 }, 00:14:40.708 { 00:14:40.708 "name": "pt3", 00:14:40.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.708 "is_configured": true, 00:14:40.708 "data_offset": 2048, 00:14:40.708 "data_size": 63488 00:14:40.708 }, 00:14:40.708 { 00:14:40.708 "name": "pt4", 00:14:40.708 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.708 "is_configured": true, 00:14:40.708 "data_offset": 2048, 00:14:40.708 "data_size": 63488 00:14:40.708 } 00:14:40.708 ] 00:14:40.708 }' 00:14:40.708 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.708 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 [2024-11-18 23:10:00.438324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.278 "name": "raid_bdev1", 00:14:41.278 "aliases": [ 00:14:41.278 "cb8a3875-af9a-45f5-b117-05a6d1172d2b" 00:14:41.278 ], 00:14:41.278 "product_name": "Raid Volume", 00:14:41.278 "block_size": 512, 00:14:41.278 "num_blocks": 190464, 00:14:41.278 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:41.278 "assigned_rate_limits": { 00:14:41.278 "rw_ios_per_sec": 0, 00:14:41.278 "rw_mbytes_per_sec": 0, 00:14:41.278 "r_mbytes_per_sec": 0, 00:14:41.278 "w_mbytes_per_sec": 0 00:14:41.278 }, 00:14:41.278 "claimed": false, 00:14:41.278 "zoned": false, 00:14:41.278 "supported_io_types": { 00:14:41.278 "read": true, 00:14:41.278 "write": true, 00:14:41.278 "unmap": false, 00:14:41.278 "flush": false, 00:14:41.278 "reset": true, 00:14:41.278 "nvme_admin": false, 00:14:41.278 "nvme_io": false, 00:14:41.278 "nvme_io_md": false, 00:14:41.278 "write_zeroes": true, 00:14:41.278 "zcopy": false, 00:14:41.278 "get_zone_info": false, 00:14:41.278 "zone_management": false, 00:14:41.278 "zone_append": false, 00:14:41.278 "compare": false, 00:14:41.278 "compare_and_write": false, 00:14:41.278 "abort": false, 00:14:41.278 "seek_hole": false, 00:14:41.278 "seek_data": false, 00:14:41.278 "copy": false, 00:14:41.278 "nvme_iov_md": false 00:14:41.278 }, 00:14:41.278 "driver_specific": { 00:14:41.278 "raid": { 00:14:41.278 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:41.278 "strip_size_kb": 64, 00:14:41.278 "state": "online", 00:14:41.278 "raid_level": "raid5f", 00:14:41.278 "superblock": true, 00:14:41.278 "num_base_bdevs": 4, 00:14:41.278 "num_base_bdevs_discovered": 4, 00:14:41.278 "num_base_bdevs_operational": 4, 00:14:41.278 "base_bdevs_list": [ 00:14:41.278 { 00:14:41.278 "name": "pt1", 00:14:41.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.278 "is_configured": true, 00:14:41.278 "data_offset": 2048, 
00:14:41.278 "data_size": 63488 00:14:41.278 }, 00:14:41.278 { 00:14:41.278 "name": "pt2", 00:14:41.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.278 "is_configured": true, 00:14:41.278 "data_offset": 2048, 00:14:41.278 "data_size": 63488 00:14:41.278 }, 00:14:41.278 { 00:14:41.278 "name": "pt3", 00:14:41.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.278 "is_configured": true, 00:14:41.278 "data_offset": 2048, 00:14:41.278 "data_size": 63488 00:14:41.278 }, 00:14:41.278 { 00:14:41.278 "name": "pt4", 00:14:41.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.278 "is_configured": true, 00:14:41.278 "data_offset": 2048, 00:14:41.278 "data_size": 63488 00:14:41.278 } 00:14:41.278 ] 00:14:41.278 } 00:14:41.278 } 00:14:41.278 }' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:41.278 pt2 00:14:41.278 pt3 00:14:41.278 pt4' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 23:10:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.537 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:41.538 [2024-11-18 23:10:00.781683] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cb8a3875-af9a-45f5-b117-05a6d1172d2b 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
cb8a3875-af9a-45f5-b117-05a6d1172d2b ']' 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.538 [2024-11-18 23:10:00.829435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.538 [2024-11-18 23:10:00.829463] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.538 [2024-11-18 23:10:00.829528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.538 [2024-11-18 23:10:00.829608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.538 [2024-11-18 23:10:00.829629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.538 
23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.538 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.798 23:10:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.798 [2024-11-18 23:10:00.993205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.798 [2024-11-18 23:10:00.994982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.798 [2024-11-18 23:10:00.995028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:41.798 [2024-11-18 23:10:00.995052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:41.798 [2024-11-18 23:10:00.995091] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:41.798 [2024-11-18 23:10:00.995148] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:41.798 [2024-11-18 23:10:00.995168] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:41.798 [2024-11-18 23:10:00.995184] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:41.798 [2024-11-18 23:10:00.995196] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.798 [2024-11-18 23:10:00.995206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:41.798 request: 00:14:41.798 { 00:14:41.798 "name": "raid_bdev1", 00:14:41.798 "raid_level": "raid5f", 00:14:41.798 "base_bdevs": [ 00:14:41.798 "malloc1", 00:14:41.798 "malloc2", 00:14:41.798 "malloc3", 00:14:41.798 "malloc4" 00:14:41.798 ], 00:14:41.798 "strip_size_kb": 64, 00:14:41.798 "superblock": false, 00:14:41.798 "method": "bdev_raid_create", 00:14:41.798 "req_id": 1 00:14:41.798 } 00:14:41.798 Got JSON-RPC error response 
00:14:41.798 response: 00:14:41.798 { 00:14:41.798 "code": -17, 00:14:41.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.798 } 00:14:41.798 23:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.798 [2024-11-18 23:10:01.061049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.798 [2024-11-18 23:10:01.061088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:41.798 [2024-11-18 23:10:01.061106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:41.798 [2024-11-18 23:10:01.061115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.798 [2024-11-18 23:10:01.063181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.798 [2024-11-18 23:10:01.063212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.798 [2024-11-18 23:10:01.063271] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.798 [2024-11-18 23:10:01.063359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.798 pt1 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.798 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.799 "name": "raid_bdev1", 00:14:41.799 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:41.799 "strip_size_kb": 64, 00:14:41.799 "state": "configuring", 00:14:41.799 "raid_level": "raid5f", 00:14:41.799 "superblock": true, 00:14:41.799 "num_base_bdevs": 4, 00:14:41.799 "num_base_bdevs_discovered": 1, 00:14:41.799 "num_base_bdevs_operational": 4, 00:14:41.799 "base_bdevs_list": [ 00:14:41.799 { 00:14:41.799 "name": "pt1", 00:14:41.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.799 "is_configured": true, 00:14:41.799 "data_offset": 2048, 00:14:41.799 "data_size": 63488 00:14:41.799 }, 00:14:41.799 { 00:14:41.799 "name": null, 00:14:41.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.799 "is_configured": false, 00:14:41.799 "data_offset": 2048, 00:14:41.799 "data_size": 63488 00:14:41.799 }, 00:14:41.799 { 00:14:41.799 "name": null, 00:14:41.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.799 "is_configured": false, 00:14:41.799 "data_offset": 2048, 00:14:41.799 "data_size": 63488 00:14:41.799 }, 00:14:41.799 { 00:14:41.799 "name": null, 00:14:41.799 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.799 "is_configured": false, 00:14:41.799 "data_offset": 2048, 00:14:41.799 "data_size": 63488 00:14:41.799 } 00:14:41.799 ] 00:14:41.799 }' 
00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.799 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.367 [2024-11-18 23:10:01.492361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.367 [2024-11-18 23:10:01.492404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.367 [2024-11-18 23:10:01.492421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:42.367 [2024-11-18 23:10:01.492430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.367 [2024-11-18 23:10:01.492761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.367 [2024-11-18 23:10:01.492776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.367 [2024-11-18 23:10:01.492834] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.367 [2024-11-18 23:10:01.492850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.367 pt2 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.367 [2024-11-18 23:10:01.504348] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.367 "name": "raid_bdev1", 00:14:42.367 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:42.367 "strip_size_kb": 64, 00:14:42.367 "state": "configuring", 00:14:42.367 "raid_level": "raid5f", 00:14:42.367 "superblock": true, 00:14:42.367 "num_base_bdevs": 4, 00:14:42.367 "num_base_bdevs_discovered": 1, 00:14:42.367 "num_base_bdevs_operational": 4, 00:14:42.367 "base_bdevs_list": [ 00:14:42.367 { 00:14:42.367 "name": "pt1", 00:14:42.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.367 "is_configured": true, 00:14:42.367 "data_offset": 2048, 00:14:42.367 "data_size": 63488 00:14:42.367 }, 00:14:42.367 { 00:14:42.367 "name": null, 00:14:42.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.367 "is_configured": false, 00:14:42.367 "data_offset": 0, 00:14:42.367 "data_size": 63488 00:14:42.367 }, 00:14:42.367 { 00:14:42.367 "name": null, 00:14:42.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.367 "is_configured": false, 00:14:42.367 "data_offset": 2048, 00:14:42.367 "data_size": 63488 00:14:42.367 }, 00:14:42.367 { 00:14:42.367 "name": null, 00:14:42.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:42.367 "is_configured": false, 00:14:42.367 "data_offset": 2048, 00:14:42.367 "data_size": 63488 00:14:42.367 } 00:14:42.367 ] 00:14:42.367 }' 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.367 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.634 [2024-11-18 23:10:01.987484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.634 [2024-11-18 23:10:01.987529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.634 [2024-11-18 23:10:01.987542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:42.634 [2024-11-18 23:10:01.987552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.634 [2024-11-18 23:10:01.987869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.634 [2024-11-18 23:10:01.987886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.634 [2024-11-18 23:10:01.987938] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.634 [2024-11-18 23:10:01.987955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.634 pt2 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.634 23:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.634 [2024-11-18 23:10:01.999444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:42.634 [2024-11-18 23:10:01.999491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.634 [2024-11-18 23:10:01.999506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:42.634 [2024-11-18 23:10:01.999515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.634 [2024-11-18 23:10:01.999821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.634 [2024-11-18 23:10:01.999843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.634 [2024-11-18 23:10:01.999893] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.634 [2024-11-18 23:10:01.999911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.634 pt3 00:14:42.634 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.635 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.635 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.635 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:42.635 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.635 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.897 [2024-11-18 23:10:02.011447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:42.897 [2024-11-18 23:10:02.011492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.897 [2024-11-18 23:10:02.011506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:42.897 [2024-11-18 23:10:02.011515] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.897 [2024-11-18 23:10:02.011797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.897 [2024-11-18 23:10:02.011814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:42.897 [2024-11-18 23:10:02.011858] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:42.897 [2024-11-18 23:10:02.011875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:42.897 [2024-11-18 23:10:02.011964] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:42.897 [2024-11-18 23:10:02.011975] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:42.897 [2024-11-18 23:10:02.012185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:42.897 [2024-11-18 23:10:02.012644] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:42.897 [2024-11-18 23:10:02.012662] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:42.897 [2024-11-18 23:10:02.012752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.897 pt4 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.897 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.898 "name": "raid_bdev1", 00:14:42.898 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:42.898 "strip_size_kb": 64, 00:14:42.898 "state": "online", 00:14:42.898 "raid_level": "raid5f", 00:14:42.898 "superblock": true, 00:14:42.898 "num_base_bdevs": 4, 00:14:42.898 "num_base_bdevs_discovered": 4, 00:14:42.898 "num_base_bdevs_operational": 4, 00:14:42.898 "base_bdevs_list": [ 00:14:42.898 { 00:14:42.898 "name": "pt1", 00:14:42.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.898 "is_configured": true, 00:14:42.898 
"data_offset": 2048, 00:14:42.898 "data_size": 63488 00:14:42.898 }, 00:14:42.898 { 00:14:42.898 "name": "pt2", 00:14:42.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.898 "is_configured": true, 00:14:42.898 "data_offset": 2048, 00:14:42.898 "data_size": 63488 00:14:42.898 }, 00:14:42.898 { 00:14:42.898 "name": "pt3", 00:14:42.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.898 "is_configured": true, 00:14:42.898 "data_offset": 2048, 00:14:42.898 "data_size": 63488 00:14:42.898 }, 00:14:42.898 { 00:14:42.898 "name": "pt4", 00:14:42.898 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:42.898 "is_configured": true, 00:14:42.898 "data_offset": 2048, 00:14:42.898 "data_size": 63488 00:14:42.898 } 00:14:42.898 ] 00:14:42.898 }' 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.898 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.157 23:10:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.157 [2024-11-18 23:10:02.494812] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.157 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.157 "name": "raid_bdev1", 00:14:43.157 "aliases": [ 00:14:43.157 "cb8a3875-af9a-45f5-b117-05a6d1172d2b" 00:14:43.157 ], 00:14:43.157 "product_name": "Raid Volume", 00:14:43.157 "block_size": 512, 00:14:43.157 "num_blocks": 190464, 00:14:43.157 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:43.157 "assigned_rate_limits": { 00:14:43.157 "rw_ios_per_sec": 0, 00:14:43.157 "rw_mbytes_per_sec": 0, 00:14:43.157 "r_mbytes_per_sec": 0, 00:14:43.157 "w_mbytes_per_sec": 0 00:14:43.157 }, 00:14:43.157 "claimed": false, 00:14:43.157 "zoned": false, 00:14:43.157 "supported_io_types": { 00:14:43.157 "read": true, 00:14:43.157 "write": true, 00:14:43.157 "unmap": false, 00:14:43.157 "flush": false, 00:14:43.157 "reset": true, 00:14:43.157 "nvme_admin": false, 00:14:43.157 "nvme_io": false, 00:14:43.157 "nvme_io_md": false, 00:14:43.157 "write_zeroes": true, 00:14:43.157 "zcopy": false, 00:14:43.157 "get_zone_info": false, 00:14:43.158 "zone_management": false, 00:14:43.158 "zone_append": false, 00:14:43.158 "compare": false, 00:14:43.158 "compare_and_write": false, 00:14:43.158 "abort": false, 00:14:43.158 "seek_hole": false, 00:14:43.158 "seek_data": false, 00:14:43.158 "copy": false, 00:14:43.158 "nvme_iov_md": false 00:14:43.158 }, 00:14:43.158 "driver_specific": { 00:14:43.158 "raid": { 00:14:43.158 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:43.158 "strip_size_kb": 64, 00:14:43.158 "state": "online", 00:14:43.158 "raid_level": "raid5f", 00:14:43.158 "superblock": true, 00:14:43.158 "num_base_bdevs": 4, 00:14:43.158 "num_base_bdevs_discovered": 4, 
00:14:43.158 "num_base_bdevs_operational": 4, 00:14:43.158 "base_bdevs_list": [ 00:14:43.158 { 00:14:43.158 "name": "pt1", 00:14:43.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 2048, 00:14:43.158 "data_size": 63488 00:14:43.158 }, 00:14:43.158 { 00:14:43.158 "name": "pt2", 00:14:43.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 2048, 00:14:43.158 "data_size": 63488 00:14:43.158 }, 00:14:43.158 { 00:14:43.158 "name": "pt3", 00:14:43.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 2048, 00:14:43.158 "data_size": 63488 00:14:43.158 }, 00:14:43.158 { 00:14:43.158 "name": "pt4", 00:14:43.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 2048, 00:14:43.158 "data_size": 63488 00:14:43.158 } 00:14:43.158 ] 00:14:43.158 } 00:14:43.158 } 00:14:43.158 }' 00:14:43.158 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:43.418 pt2 00:14:43.418 pt3 00:14:43.418 pt4' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.418 23:10:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.418 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.678 [2024-11-18 23:10:02.798323] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.678 
23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cb8a3875-af9a-45f5-b117-05a6d1172d2b '!=' cb8a3875-af9a-45f5-b117-05a6d1172d2b ']' 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.678 [2024-11-18 23:10:02.826117] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.678 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.679 "name": "raid_bdev1", 00:14:43.679 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:43.679 "strip_size_kb": 64, 00:14:43.679 "state": "online", 00:14:43.679 "raid_level": "raid5f", 00:14:43.679 "superblock": true, 00:14:43.679 "num_base_bdevs": 4, 00:14:43.679 "num_base_bdevs_discovered": 3, 00:14:43.679 "num_base_bdevs_operational": 3, 00:14:43.679 "base_bdevs_list": [ 00:14:43.679 { 00:14:43.679 "name": null, 00:14:43.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.679 "is_configured": false, 00:14:43.679 "data_offset": 0, 00:14:43.679 "data_size": 63488 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "name": "pt2", 00:14:43.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.679 "is_configured": true, 00:14:43.679 "data_offset": 2048, 00:14:43.679 "data_size": 63488 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "name": "pt3", 00:14:43.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.679 "is_configured": true, 00:14:43.679 "data_offset": 2048, 00:14:43.679 "data_size": 63488 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "name": "pt4", 00:14:43.679 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:43.679 "is_configured": true, 00:14:43.679 
"data_offset": 2048, 00:14:43.679 "data_size": 63488 00:14:43.679 } 00:14:43.679 ] 00:14:43.679 }' 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.679 23:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.939 [2024-11-18 23:10:03.253369] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.939 [2024-11-18 23:10:03.253394] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.939 [2024-11-18 23:10:03.253449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.939 [2024-11-18 23:10:03.253507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.939 [2024-11-18 23:10:03.253518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.939 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.199 [2024-11-18 23:10:03.353192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:44.199 [2024-11-18 23:10:03.353308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.199 [2024-11-18 23:10:03.353330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:44.199 [2024-11-18 23:10:03.353340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.199 [2024-11-18 23:10:03.355403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.199 [2024-11-18 23:10:03.355441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:44.199 [2024-11-18 23:10:03.355498] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:44.199 [2024-11-18 23:10:03.355528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.199 pt2 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.199 "name": "raid_bdev1", 00:14:44.199 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:44.199 "strip_size_kb": 64, 00:14:44.199 "state": "configuring", 00:14:44.199 "raid_level": "raid5f", 00:14:44.199 "superblock": true, 00:14:44.199 
"num_base_bdevs": 4, 00:14:44.199 "num_base_bdevs_discovered": 1, 00:14:44.199 "num_base_bdevs_operational": 3, 00:14:44.199 "base_bdevs_list": [ 00:14:44.199 { 00:14:44.199 "name": null, 00:14:44.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.199 "is_configured": false, 00:14:44.199 "data_offset": 2048, 00:14:44.199 "data_size": 63488 00:14:44.199 }, 00:14:44.199 { 00:14:44.199 "name": "pt2", 00:14:44.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.199 "is_configured": true, 00:14:44.199 "data_offset": 2048, 00:14:44.199 "data_size": 63488 00:14:44.199 }, 00:14:44.199 { 00:14:44.199 "name": null, 00:14:44.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.199 "is_configured": false, 00:14:44.199 "data_offset": 2048, 00:14:44.199 "data_size": 63488 00:14:44.199 }, 00:14:44.199 { 00:14:44.199 "name": null, 00:14:44.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:44.199 "is_configured": false, 00:14:44.199 "data_offset": 2048, 00:14:44.199 "data_size": 63488 00:14:44.199 } 00:14:44.199 ] 00:14:44.199 }' 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.199 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.769 [2024-11-18 23:10:03.864329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.769 [2024-11-18 
23:10:03.864428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.769 [2024-11-18 23:10:03.864460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:44.769 [2024-11-18 23:10:03.864491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.769 [2024-11-18 23:10:03.864830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.769 [2024-11-18 23:10:03.864886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.769 [2024-11-18 23:10:03.864965] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:44.769 [2024-11-18 23:10:03.865021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.769 pt3 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.769 "name": "raid_bdev1", 00:14:44.769 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:44.769 "strip_size_kb": 64, 00:14:44.769 "state": "configuring", 00:14:44.769 "raid_level": "raid5f", 00:14:44.769 "superblock": true, 00:14:44.769 "num_base_bdevs": 4, 00:14:44.769 "num_base_bdevs_discovered": 2, 00:14:44.769 "num_base_bdevs_operational": 3, 00:14:44.769 "base_bdevs_list": [ 00:14:44.769 { 00:14:44.769 "name": null, 00:14:44.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.769 "is_configured": false, 00:14:44.769 "data_offset": 2048, 00:14:44.769 "data_size": 63488 00:14:44.769 }, 00:14:44.769 { 00:14:44.769 "name": "pt2", 00:14:44.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.769 "is_configured": true, 00:14:44.769 "data_offset": 2048, 00:14:44.769 "data_size": 63488 00:14:44.769 }, 00:14:44.769 { 00:14:44.769 "name": "pt3", 00:14:44.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.769 "is_configured": true, 00:14:44.769 "data_offset": 2048, 00:14:44.769 "data_size": 63488 00:14:44.769 }, 00:14:44.769 { 00:14:44.769 "name": null, 00:14:44.769 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:44.769 "is_configured": false, 00:14:44.769 "data_offset": 2048, 
00:14:44.769 "data_size": 63488 00:14:44.769 } 00:14:44.769 ] 00:14:44.769 }' 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.769 23:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:45.028 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:45.028 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:45.028 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:45.028 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.028 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 [2024-11-18 23:10:04.327490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:45.028 [2024-11-18 23:10:04.327579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.028 [2024-11-18 23:10:04.327602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:45.028 [2024-11-18 23:10:04.327613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.028 [2024-11-18 23:10:04.327934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.028 [2024-11-18 23:10:04.327952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:45.028 [2024-11-18 23:10:04.328007] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:45.028 [2024-11-18 23:10:04.328025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:45.028 [2024-11-18 23:10:04.328106] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:45.028 [2024-11-18 23:10:04.328117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:45.028 [2024-11-18 23:10:04.328340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:45.028 [2024-11-18 23:10:04.328829] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:45.028 [2024-11-18 23:10:04.328846] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:45.028 [2024-11-18 23:10:04.329036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.028 pt4 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.029 
23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.029 "name": "raid_bdev1", 00:14:45.029 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:45.029 "strip_size_kb": 64, 00:14:45.029 "state": "online", 00:14:45.029 "raid_level": "raid5f", 00:14:45.029 "superblock": true, 00:14:45.029 "num_base_bdevs": 4, 00:14:45.029 "num_base_bdevs_discovered": 3, 00:14:45.029 "num_base_bdevs_operational": 3, 00:14:45.029 "base_bdevs_list": [ 00:14:45.029 { 00:14:45.029 "name": null, 00:14:45.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.029 "is_configured": false, 00:14:45.029 "data_offset": 2048, 00:14:45.029 "data_size": 63488 00:14:45.029 }, 00:14:45.029 { 00:14:45.029 "name": "pt2", 00:14:45.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.029 "is_configured": true, 00:14:45.029 "data_offset": 2048, 00:14:45.029 "data_size": 63488 00:14:45.029 }, 00:14:45.029 { 00:14:45.029 "name": "pt3", 00:14:45.029 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.029 "is_configured": true, 00:14:45.029 "data_offset": 2048, 00:14:45.029 "data_size": 63488 00:14:45.029 }, 00:14:45.029 { 00:14:45.029 "name": "pt4", 00:14:45.029 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:45.029 "is_configured": true, 00:14:45.029 "data_offset": 2048, 00:14:45.029 "data_size": 63488 00:14:45.029 } 00:14:45.029 ] 00:14:45.029 }' 00:14:45.029 23:10:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.029 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.599 [2024-11-18 23:10:04.710840] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.599 [2024-11-18 23:10:04.710866] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.599 [2024-11-18 23:10:04.710915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.599 [2024-11-18 23:10:04.710976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.599 [2024-11-18 23:10:04.710985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.599 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.599 [2024-11-18 23:10:04.782728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.599 [2024-11-18 23:10:04.782816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.599 [2024-11-18 23:10:04.782838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:45.599 [2024-11-18 23:10:04.782847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.600 [2024-11-18 23:10:04.785040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.600 [2024-11-18 23:10:04.785074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.600 [2024-11-18 23:10:04.785128] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:45.600 [2024-11-18 23:10:04.785165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:45.600 
[2024-11-18 23:10:04.785257] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:45.600 [2024-11-18 23:10:04.785268] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.600 [2024-11-18 23:10:04.785307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:45.600 [2024-11-18 23:10:04.785356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:45.600 [2024-11-18 23:10:04.785460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:45.600 pt1 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.600 "name": "raid_bdev1", 00:14:45.600 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:45.600 "strip_size_kb": 64, 00:14:45.600 "state": "configuring", 00:14:45.600 "raid_level": "raid5f", 00:14:45.600 "superblock": true, 00:14:45.600 "num_base_bdevs": 4, 00:14:45.600 "num_base_bdevs_discovered": 2, 00:14:45.600 "num_base_bdevs_operational": 3, 00:14:45.600 "base_bdevs_list": [ 00:14:45.600 { 00:14:45.600 "name": null, 00:14:45.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.600 "is_configured": false, 00:14:45.600 "data_offset": 2048, 00:14:45.600 "data_size": 63488 00:14:45.600 }, 00:14:45.600 { 00:14:45.600 "name": "pt2", 00:14:45.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.600 "is_configured": true, 00:14:45.600 "data_offset": 2048, 00:14:45.600 "data_size": 63488 00:14:45.600 }, 00:14:45.600 { 00:14:45.600 "name": "pt3", 00:14:45.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.600 "is_configured": true, 00:14:45.600 "data_offset": 2048, 00:14:45.600 "data_size": 63488 00:14:45.600 }, 00:14:45.600 { 00:14:45.600 "name": null, 00:14:45.600 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:45.600 "is_configured": false, 00:14:45.600 "data_offset": 2048, 00:14:45.600 "data_size": 63488 00:14:45.600 } 00:14:45.600 ] 
00:14:45.600 }' 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.600 23:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.860 [2024-11-18 23:10:05.225962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:45.860 [2024-11-18 23:10:05.226060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.860 [2024-11-18 23:10:05.226092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:45.860 [2024-11-18 23:10:05.226122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.860 [2024-11-18 23:10:05.226482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.860 [2024-11-18 23:10:05.226539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:45.860 [2024-11-18 23:10:05.226617] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:45.860 [2024-11-18 23:10:05.226664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:45.860 [2024-11-18 23:10:05.226765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:45.860 [2024-11-18 23:10:05.226806] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:45.860 [2024-11-18 23:10:05.227044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:45.860 [2024-11-18 23:10:05.227600] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:45.860 [2024-11-18 23:10:05.227651] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:45.860 [2024-11-18 23:10:05.227850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.860 pt4 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.860 23:10:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.860 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.120 "name": "raid_bdev1", 00:14:46.120 "uuid": "cb8a3875-af9a-45f5-b117-05a6d1172d2b", 00:14:46.120 "strip_size_kb": 64, 00:14:46.120 "state": "online", 00:14:46.120 "raid_level": "raid5f", 00:14:46.120 "superblock": true, 00:14:46.120 "num_base_bdevs": 4, 00:14:46.120 "num_base_bdevs_discovered": 3, 00:14:46.120 "num_base_bdevs_operational": 3, 00:14:46.120 "base_bdevs_list": [ 00:14:46.120 { 00:14:46.120 "name": null, 00:14:46.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.120 "is_configured": false, 00:14:46.120 "data_offset": 2048, 00:14:46.120 "data_size": 63488 00:14:46.120 }, 00:14:46.120 { 00:14:46.120 "name": "pt2", 00:14:46.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.120 "is_configured": true, 00:14:46.120 "data_offset": 2048, 00:14:46.120 "data_size": 63488 00:14:46.120 }, 00:14:46.120 { 00:14:46.120 "name": "pt3", 00:14:46.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.120 "is_configured": true, 00:14:46.120 "data_offset": 2048, 00:14:46.120 "data_size": 63488 
00:14:46.120 }, 00:14:46.120 { 00:14:46.120 "name": "pt4", 00:14:46.120 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:46.120 "is_configured": true, 00:14:46.120 "data_offset": 2048, 00:14:46.120 "data_size": 63488 00:14:46.120 } 00:14:46.120 ] 00:14:46.120 }' 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.120 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.379 [2024-11-18 23:10:05.693381] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cb8a3875-af9a-45f5-b117-05a6d1172d2b '!=' cb8a3875-af9a-45f5-b117-05a6d1172d2b ']' 00:14:46.379 23:10:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94509 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94509 ']' 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94509 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.379 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94509 00:14:46.638 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.638 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.638 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94509' 00:14:46.638 killing process with pid 94509 00:14:46.638 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94509 00:14:46.638 [2024-11-18 23:10:05.775243] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.638 [2024-11-18 23:10:05.775320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.638 [2024-11-18 23:10:05.775418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.638 [2024-11-18 23:10:05.775430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:46.638 23:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94509 00:14:46.638 [2024-11-18 23:10:05.818276] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.898 23:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:46.898 
00:14:46.898 real 0m7.149s 00:14:46.898 user 0m11.943s 00:14:46.898 sys 0m1.640s 00:14:46.898 23:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.898 23:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 ************************************ 00:14:46.898 END TEST raid5f_superblock_test 00:14:46.898 ************************************ 00:14:46.898 23:10:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:46.898 23:10:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:46.898 23:10:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:46.898 23:10:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.898 23:10:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 ************************************ 00:14:46.898 START TEST raid5f_rebuild_test 00:14:46.898 ************************************ 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:46.898 23:10:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94978 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94978 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94978 ']' 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.898 23:10:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:46.898 Zero copy mechanism will not be used. 00:14:46.898 [2024-11-18 23:10:06.256333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:46.898 [2024-11-18 23:10:06.256463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94978 ] 00:14:47.159 [2024-11-18 23:10:06.420203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.159 [2024-11-18 23:10:06.466953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.159 [2024-11-18 23:10:06.509991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.159 [2024-11-18 23:10:06.510027] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.730 BaseBdev1_malloc 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.730 [2024-11-18 23:10:07.096418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:47.730 [2024-11-18 23:10:07.096477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.730 [2024-11-18 23:10:07.096522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:47.730 [2024-11-18 23:10:07.096537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.730 [2024-11-18 23:10:07.098757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.730 [2024-11-18 23:10:07.098793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.730 BaseBdev1 00:14:47.730 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.731 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.731 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:47.731 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.731 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.992 BaseBdev2_malloc 00:14:47.992 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.992 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:47.992 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.992 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.992 [2024-11-18 23:10:07.143359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:47.992 [2024-11-18 23:10:07.143462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.992 [2024-11-18 23:10:07.143506] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:47.992 [2024-11-18 23:10:07.143526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.993 [2024-11-18 23:10:07.148199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.993 [2024-11-18 23:10:07.148270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:47.993 BaseBdev2 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 BaseBdev3_malloc 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 [2024-11-18 23:10:07.174580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:47.993 [2024-11-18 23:10:07.174665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.993 [2024-11-18 23:10:07.174719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:47.993 [2024-11-18 23:10:07.174747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.993 
[2024-11-18 23:10:07.176787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.993 [2024-11-18 23:10:07.176854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:47.993 BaseBdev3 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 BaseBdev4_malloc 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 [2024-11-18 23:10:07.203406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:47.993 [2024-11-18 23:10:07.203495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.993 [2024-11-18 23:10:07.203525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:47.993 [2024-11-18 23:10:07.203533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.993 [2024-11-18 23:10:07.205669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.993 [2024-11-18 23:10:07.205701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:14:47.993 BaseBdev4 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 spare_malloc 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 spare_delay 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 [2024-11-18 23:10:07.244044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.993 [2024-11-18 23:10:07.244093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.993 [2024-11-18 23:10:07.244115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:47.993 [2024-11-18 23:10:07.244123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.993 [2024-11-18 23:10:07.246090] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.993 [2024-11-18 23:10:07.246125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.993 spare 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 [2024-11-18 23:10:07.256102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.993 [2024-11-18 23:10:07.257882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.993 [2024-11-18 23:10:07.257952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.993 [2024-11-18 23:10:07.257994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.993 [2024-11-18 23:10:07.258073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:47.993 [2024-11-18 23:10:07.258082] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:47.993 [2024-11-18 23:10:07.258371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:47.993 [2024-11-18 23:10:07.258834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:47.993 [2024-11-18 23:10:07.258853] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:47.993 [2024-11-18 23:10:07.258972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.993 23:10:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.993 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.993 "name": "raid_bdev1", 00:14:47.993 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:47.993 "strip_size_kb": 64, 00:14:47.993 "state": "online", 00:14:47.993 
"raid_level": "raid5f", 00:14:47.993 "superblock": false, 00:14:47.993 "num_base_bdevs": 4, 00:14:47.993 "num_base_bdevs_discovered": 4, 00:14:47.993 "num_base_bdevs_operational": 4, 00:14:47.993 "base_bdevs_list": [ 00:14:47.993 { 00:14:47.993 "name": "BaseBdev1", 00:14:47.993 "uuid": "b6dcd93c-0217-568e-a550-d50beb285123", 00:14:47.993 "is_configured": true, 00:14:47.993 "data_offset": 0, 00:14:47.993 "data_size": 65536 00:14:47.993 }, 00:14:47.993 { 00:14:47.993 "name": "BaseBdev2", 00:14:47.993 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:47.993 "is_configured": true, 00:14:47.993 "data_offset": 0, 00:14:47.993 "data_size": 65536 00:14:47.993 }, 00:14:47.993 { 00:14:47.993 "name": "BaseBdev3", 00:14:47.993 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:47.993 "is_configured": true, 00:14:47.993 "data_offset": 0, 00:14:47.993 "data_size": 65536 00:14:47.993 }, 00:14:47.993 { 00:14:47.993 "name": "BaseBdev4", 00:14:47.993 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:47.993 "is_configured": true, 00:14:47.993 "data_offset": 0, 00:14:47.993 "data_size": 65536 00:14:47.993 } 00:14:47.993 ] 00:14:47.993 }' 00:14:47.994 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.994 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:48.564 [2024-11-18 23:10:07.732065] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:48.564 23:10:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:48.824 [2024-11-18 23:10:07.975526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:48.824 /dev/nbd0 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.824 1+0 records in 00:14:48.824 1+0 records out 00:14:48.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425023 s, 9.6 MB/s 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:48.824 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:49.393 512+0 records in 00:14:49.393 512+0 records out 00:14:49.393 100663296 bytes (101 MB, 96 MiB) copied, 0.686087 s, 147 MB/s 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.393 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:49.652 [2024-11-18 23:10:08.950331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.652 [2024-11-18 23:10:08.974351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.652 23:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.652 23:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.914 23:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.914 "name": "raid_bdev1", 00:14:49.914 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:49.914 "strip_size_kb": 64, 00:14:49.914 "state": "online", 00:14:49.914 "raid_level": "raid5f", 00:14:49.914 "superblock": false, 00:14:49.914 "num_base_bdevs": 4, 00:14:49.914 "num_base_bdevs_discovered": 3, 00:14:49.914 "num_base_bdevs_operational": 3, 00:14:49.914 "base_bdevs_list": [ 00:14:49.914 { 00:14:49.914 "name": null, 00:14:49.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.914 "is_configured": false, 00:14:49.914 "data_offset": 0, 00:14:49.914 "data_size": 65536 00:14:49.914 }, 00:14:49.914 { 00:14:49.914 "name": "BaseBdev2", 00:14:49.914 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:49.914 "is_configured": true, 00:14:49.914 "data_offset": 0, 00:14:49.914 "data_size": 65536 00:14:49.914 }, 00:14:49.914 { 00:14:49.914 "name": "BaseBdev3", 00:14:49.914 "uuid": 
"909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:49.914 "is_configured": true, 00:14:49.914 "data_offset": 0, 00:14:49.914 "data_size": 65536 00:14:49.914 }, 00:14:49.914 { 00:14:49.914 "name": "BaseBdev4", 00:14:49.914 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:49.914 "is_configured": true, 00:14:49.914 "data_offset": 0, 00:14:49.914 "data_size": 65536 00:14:49.914 } 00:14:49.914 ] 00:14:49.914 }' 00:14:49.914 23:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.914 23:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.177 23:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.177 23:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.177 23:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.177 [2024-11-18 23:10:09.469508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.177 [2024-11-18 23:10:09.472935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:50.177 [2024-11-18 23:10:09.475098] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.177 23:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.177 23:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.115 23:10:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.115 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.376 "name": "raid_bdev1", 00:14:51.376 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:51.376 "strip_size_kb": 64, 00:14:51.376 "state": "online", 00:14:51.376 "raid_level": "raid5f", 00:14:51.376 "superblock": false, 00:14:51.376 "num_base_bdevs": 4, 00:14:51.376 "num_base_bdevs_discovered": 4, 00:14:51.376 "num_base_bdevs_operational": 4, 00:14:51.376 "process": { 00:14:51.376 "type": "rebuild", 00:14:51.376 "target": "spare", 00:14:51.376 "progress": { 00:14:51.376 "blocks": 19200, 00:14:51.376 "percent": 9 00:14:51.376 } 00:14:51.376 }, 00:14:51.376 "base_bdevs_list": [ 00:14:51.376 { 00:14:51.376 "name": "spare", 00:14:51.376 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 }, 00:14:51.376 { 00:14:51.376 "name": "BaseBdev2", 00:14:51.376 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 }, 00:14:51.376 { 00:14:51.376 "name": "BaseBdev3", 00:14:51.376 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 }, 
00:14:51.376 { 00:14:51.376 "name": "BaseBdev4", 00:14:51.376 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 } 00:14:51.376 ] 00:14:51.376 }' 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 [2024-11-18 23:10:10.637604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.376 [2024-11-18 23:10:10.680364] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.376 [2024-11-18 23:10:10.680461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.376 [2024-11-18 23:10:10.680517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.376 [2024-11-18 23:10:10.680539] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.376 "name": "raid_bdev1", 00:14:51.376 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:51.376 "strip_size_kb": 64, 00:14:51.376 "state": "online", 00:14:51.376 "raid_level": "raid5f", 00:14:51.376 "superblock": false, 00:14:51.376 "num_base_bdevs": 4, 00:14:51.376 "num_base_bdevs_discovered": 3, 00:14:51.376 "num_base_bdevs_operational": 3, 00:14:51.376 "base_bdevs_list": [ 00:14:51.376 { 00:14:51.376 "name": null, 00:14:51.376 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:51.376 "is_configured": false, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 }, 00:14:51.376 { 00:14:51.376 "name": "BaseBdev2", 00:14:51.376 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 }, 00:14:51.376 { 00:14:51.376 "name": "BaseBdev3", 00:14:51.376 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 }, 00:14:51.376 { 00:14:51.376 "name": "BaseBdev4", 00:14:51.376 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:51.376 "is_configured": true, 00:14:51.376 "data_offset": 0, 00:14:51.376 "data_size": 65536 00:14:51.376 } 00:14:51.376 ] 00:14:51.376 }' 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.376 23:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.948 23:10:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.948 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.948 "name": "raid_bdev1", 00:14:51.948 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:51.948 "strip_size_kb": 64, 00:14:51.948 "state": "online", 00:14:51.948 "raid_level": "raid5f", 00:14:51.948 "superblock": false, 00:14:51.948 "num_base_bdevs": 4, 00:14:51.949 "num_base_bdevs_discovered": 3, 00:14:51.949 "num_base_bdevs_operational": 3, 00:14:51.949 "base_bdevs_list": [ 00:14:51.949 { 00:14:51.949 "name": null, 00:14:51.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.949 "is_configured": false, 00:14:51.949 "data_offset": 0, 00:14:51.949 "data_size": 65536 00:14:51.949 }, 00:14:51.949 { 00:14:51.949 "name": "BaseBdev2", 00:14:51.949 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:51.949 "is_configured": true, 00:14:51.949 "data_offset": 0, 00:14:51.949 "data_size": 65536 00:14:51.949 }, 00:14:51.949 { 00:14:51.949 "name": "BaseBdev3", 00:14:51.949 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:51.949 "is_configured": true, 00:14:51.949 "data_offset": 0, 00:14:51.949 "data_size": 65536 00:14:51.949 }, 00:14:51.949 { 00:14:51.949 "name": "BaseBdev4", 00:14:51.949 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:51.949 "is_configured": true, 00:14:51.949 "data_offset": 0, 00:14:51.949 "data_size": 65536 00:14:51.949 } 00:14:51.949 ] 00:14:51.949 }' 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.949 [2024-11-18 23:10:11.272826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.949 [2024-11-18 23:10:11.276098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:51.949 [2024-11-18 23:10:11.278240] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.949 23:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.331 23:10:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.331 "name": "raid_bdev1", 00:14:53.331 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:53.331 "strip_size_kb": 64, 00:14:53.331 "state": "online", 00:14:53.331 "raid_level": "raid5f", 00:14:53.331 "superblock": false, 00:14:53.331 "num_base_bdevs": 4, 00:14:53.331 "num_base_bdevs_discovered": 4, 00:14:53.331 "num_base_bdevs_operational": 4, 00:14:53.331 "process": { 00:14:53.331 "type": "rebuild", 00:14:53.331 "target": "spare", 00:14:53.331 "progress": { 00:14:53.331 "blocks": 19200, 00:14:53.331 "percent": 9 00:14:53.331 } 00:14:53.331 }, 00:14:53.331 "base_bdevs_list": [ 00:14:53.331 { 00:14:53.331 "name": "spare", 00:14:53.331 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:53.331 "is_configured": true, 00:14:53.331 "data_offset": 0, 00:14:53.331 "data_size": 65536 00:14:53.331 }, 00:14:53.331 { 00:14:53.331 "name": "BaseBdev2", 00:14:53.331 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:53.331 "is_configured": true, 00:14:53.331 "data_offset": 0, 00:14:53.331 "data_size": 65536 00:14:53.331 }, 00:14:53.331 { 00:14:53.331 "name": "BaseBdev3", 00:14:53.331 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:53.331 "is_configured": true, 00:14:53.331 "data_offset": 0, 00:14:53.331 "data_size": 65536 00:14:53.331 }, 00:14:53.331 { 00:14:53.331 "name": "BaseBdev4", 00:14:53.331 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:53.331 "is_configured": true, 00:14:53.331 "data_offset": 0, 00:14:53.331 "data_size": 65536 00:14:53.331 } 00:14:53.331 ] 00:14:53.331 }' 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=508 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.331 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.332 "name": "raid_bdev1", 00:14:53.332 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 
00:14:53.332 "strip_size_kb": 64, 00:14:53.332 "state": "online", 00:14:53.332 "raid_level": "raid5f", 00:14:53.332 "superblock": false, 00:14:53.332 "num_base_bdevs": 4, 00:14:53.332 "num_base_bdevs_discovered": 4, 00:14:53.332 "num_base_bdevs_operational": 4, 00:14:53.332 "process": { 00:14:53.332 "type": "rebuild", 00:14:53.332 "target": "spare", 00:14:53.332 "progress": { 00:14:53.332 "blocks": 21120, 00:14:53.332 "percent": 10 00:14:53.332 } 00:14:53.332 }, 00:14:53.332 "base_bdevs_list": [ 00:14:53.332 { 00:14:53.332 "name": "spare", 00:14:53.332 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:53.332 "is_configured": true, 00:14:53.332 "data_offset": 0, 00:14:53.332 "data_size": 65536 00:14:53.332 }, 00:14:53.332 { 00:14:53.332 "name": "BaseBdev2", 00:14:53.332 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:53.332 "is_configured": true, 00:14:53.332 "data_offset": 0, 00:14:53.332 "data_size": 65536 00:14:53.332 }, 00:14:53.332 { 00:14:53.332 "name": "BaseBdev3", 00:14:53.332 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:53.332 "is_configured": true, 00:14:53.332 "data_offset": 0, 00:14:53.332 "data_size": 65536 00:14:53.332 }, 00:14:53.332 { 00:14:53.332 "name": "BaseBdev4", 00:14:53.332 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:53.332 "is_configured": true, 00:14:53.332 "data_offset": 0, 00:14:53.332 "data_size": 65536 00:14:53.332 } 00:14:53.332 ] 00:14:53.332 }' 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.332 23:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.272 23:10:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.272 "name": "raid_bdev1", 00:14:54.272 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:54.272 "strip_size_kb": 64, 00:14:54.272 "state": "online", 00:14:54.272 "raid_level": "raid5f", 00:14:54.272 "superblock": false, 00:14:54.272 "num_base_bdevs": 4, 00:14:54.272 "num_base_bdevs_discovered": 4, 00:14:54.272 "num_base_bdevs_operational": 4, 00:14:54.272 "process": { 00:14:54.272 "type": "rebuild", 00:14:54.272 "target": "spare", 00:14:54.272 "progress": { 00:14:54.272 "blocks": 44160, 00:14:54.272 "percent": 22 00:14:54.272 } 00:14:54.272 }, 00:14:54.272 "base_bdevs_list": [ 00:14:54.272 { 00:14:54.272 "name": "spare", 00:14:54.272 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 
00:14:54.272 "is_configured": true, 00:14:54.272 "data_offset": 0, 00:14:54.272 "data_size": 65536 00:14:54.272 }, 00:14:54.272 { 00:14:54.272 "name": "BaseBdev2", 00:14:54.272 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:54.272 "is_configured": true, 00:14:54.272 "data_offset": 0, 00:14:54.272 "data_size": 65536 00:14:54.272 }, 00:14:54.272 { 00:14:54.272 "name": "BaseBdev3", 00:14:54.272 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:54.272 "is_configured": true, 00:14:54.272 "data_offset": 0, 00:14:54.272 "data_size": 65536 00:14:54.272 }, 00:14:54.272 { 00:14:54.272 "name": "BaseBdev4", 00:14:54.272 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:54.272 "is_configured": true, 00:14:54.272 "data_offset": 0, 00:14:54.272 "data_size": 65536 00:14:54.272 } 00:14:54.272 ] 00:14:54.272 }' 00:14:54.272 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.554 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.554 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.554 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.554 23:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.522 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.522 "name": "raid_bdev1", 00:14:55.522 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:55.522 "strip_size_kb": 64, 00:14:55.522 "state": "online", 00:14:55.522 "raid_level": "raid5f", 00:14:55.522 "superblock": false, 00:14:55.522 "num_base_bdevs": 4, 00:14:55.522 "num_base_bdevs_discovered": 4, 00:14:55.522 "num_base_bdevs_operational": 4, 00:14:55.522 "process": { 00:14:55.522 "type": "rebuild", 00:14:55.522 "target": "spare", 00:14:55.522 "progress": { 00:14:55.522 "blocks": 65280, 00:14:55.522 "percent": 33 00:14:55.522 } 00:14:55.522 }, 00:14:55.522 "base_bdevs_list": [ 00:14:55.522 { 00:14:55.522 "name": "spare", 00:14:55.522 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:55.522 "is_configured": true, 00:14:55.522 "data_offset": 0, 00:14:55.522 "data_size": 65536 00:14:55.522 }, 00:14:55.522 { 00:14:55.522 "name": "BaseBdev2", 00:14:55.522 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:55.522 "is_configured": true, 00:14:55.522 "data_offset": 0, 00:14:55.522 "data_size": 65536 00:14:55.522 }, 00:14:55.522 { 00:14:55.522 "name": "BaseBdev3", 00:14:55.522 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:55.522 "is_configured": true, 00:14:55.522 "data_offset": 0, 00:14:55.522 "data_size": 65536 00:14:55.522 }, 00:14:55.522 { 00:14:55.522 "name": 
"BaseBdev4", 00:14:55.522 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:55.522 "is_configured": true, 00:14:55.522 "data_offset": 0, 00:14:55.522 "data_size": 65536 00:14:55.522 } 00:14:55.522 ] 00:14:55.522 }' 00:14:55.523 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.523 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.523 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.523 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.523 23:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.913 23:10:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.913 23:10:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.913 "name": "raid_bdev1", 00:14:56.913 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:56.913 "strip_size_kb": 64, 00:14:56.913 "state": "online", 00:14:56.913 "raid_level": "raid5f", 00:14:56.913 "superblock": false, 00:14:56.913 "num_base_bdevs": 4, 00:14:56.913 "num_base_bdevs_discovered": 4, 00:14:56.913 "num_base_bdevs_operational": 4, 00:14:56.913 "process": { 00:14:56.913 "type": "rebuild", 00:14:56.913 "target": "spare", 00:14:56.913 "progress": { 00:14:56.913 "blocks": 88320, 00:14:56.913 "percent": 44 00:14:56.913 } 00:14:56.913 }, 00:14:56.913 "base_bdevs_list": [ 00:14:56.913 { 00:14:56.913 "name": "spare", 00:14:56.913 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:56.913 "is_configured": true, 00:14:56.913 "data_offset": 0, 00:14:56.913 "data_size": 65536 00:14:56.913 }, 00:14:56.913 { 00:14:56.913 "name": "BaseBdev2", 00:14:56.913 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:56.913 "is_configured": true, 00:14:56.913 "data_offset": 0, 00:14:56.913 "data_size": 65536 00:14:56.913 }, 00:14:56.913 { 00:14:56.913 "name": "BaseBdev3", 00:14:56.913 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:56.913 "is_configured": true, 00:14:56.913 "data_offset": 0, 00:14:56.913 "data_size": 65536 00:14:56.913 }, 00:14:56.913 { 00:14:56.913 "name": "BaseBdev4", 00:14:56.913 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:56.913 "is_configured": true, 00:14:56.913 "data_offset": 0, 00:14:56.913 "data_size": 65536 00:14:56.913 } 00:14:56.913 ] 00:14:56.913 }' 00:14:56.914 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.914 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.914 23:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.914 23:10:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.914 23:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.853 "name": "raid_bdev1", 00:14:57.853 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:57.853 "strip_size_kb": 64, 00:14:57.853 "state": "online", 00:14:57.853 "raid_level": "raid5f", 00:14:57.853 "superblock": false, 00:14:57.853 "num_base_bdevs": 4, 00:14:57.853 "num_base_bdevs_discovered": 4, 00:14:57.853 "num_base_bdevs_operational": 4, 00:14:57.853 "process": { 00:14:57.853 "type": "rebuild", 00:14:57.853 "target": "spare", 00:14:57.853 "progress": { 00:14:57.853 "blocks": 109440, 00:14:57.853 "percent": 55 00:14:57.853 } 
00:14:57.853 }, 00:14:57.853 "base_bdevs_list": [ 00:14:57.853 { 00:14:57.853 "name": "spare", 00:14:57.853 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:57.853 "is_configured": true, 00:14:57.853 "data_offset": 0, 00:14:57.853 "data_size": 65536 00:14:57.853 }, 00:14:57.853 { 00:14:57.853 "name": "BaseBdev2", 00:14:57.853 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:57.853 "is_configured": true, 00:14:57.853 "data_offset": 0, 00:14:57.853 "data_size": 65536 00:14:57.853 }, 00:14:57.853 { 00:14:57.853 "name": "BaseBdev3", 00:14:57.853 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:14:57.853 "is_configured": true, 00:14:57.853 "data_offset": 0, 00:14:57.853 "data_size": 65536 00:14:57.853 }, 00:14:57.853 { 00:14:57.853 "name": "BaseBdev4", 00:14:57.853 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:57.853 "is_configured": true, 00:14:57.853 "data_offset": 0, 00:14:57.853 "data_size": 65536 00:14:57.853 } 00:14:57.853 ] 00:14:57.853 }' 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.853 23:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.233 
23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.233 "name": "raid_bdev1", 00:14:59.233 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:14:59.233 "strip_size_kb": 64, 00:14:59.233 "state": "online", 00:14:59.233 "raid_level": "raid5f", 00:14:59.233 "superblock": false, 00:14:59.233 "num_base_bdevs": 4, 00:14:59.233 "num_base_bdevs_discovered": 4, 00:14:59.233 "num_base_bdevs_operational": 4, 00:14:59.233 "process": { 00:14:59.233 "type": "rebuild", 00:14:59.233 "target": "spare", 00:14:59.233 "progress": { 00:14:59.233 "blocks": 132480, 00:14:59.233 "percent": 67 00:14:59.233 } 00:14:59.233 }, 00:14:59.233 "base_bdevs_list": [ 00:14:59.233 { 00:14:59.233 "name": "spare", 00:14:59.233 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:14:59.233 "is_configured": true, 00:14:59.233 "data_offset": 0, 00:14:59.233 "data_size": 65536 00:14:59.233 }, 00:14:59.233 { 00:14:59.233 "name": "BaseBdev2", 00:14:59.233 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:14:59.233 "is_configured": true, 00:14:59.233 "data_offset": 0, 00:14:59.233 "data_size": 65536 00:14:59.233 }, 00:14:59.233 { 00:14:59.233 "name": "BaseBdev3", 00:14:59.233 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 
00:14:59.233 "is_configured": true, 00:14:59.233 "data_offset": 0, 00:14:59.233 "data_size": 65536 00:14:59.233 }, 00:14:59.233 { 00:14:59.233 "name": "BaseBdev4", 00:14:59.233 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:14:59.233 "is_configured": true, 00:14:59.233 "data_offset": 0, 00:14:59.233 "data_size": 65536 00:14:59.233 } 00:14:59.233 ] 00:14:59.233 }' 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.233 23:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.173 "name": "raid_bdev1", 00:15:00.173 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:15:00.173 "strip_size_kb": 64, 00:15:00.173 "state": "online", 00:15:00.173 "raid_level": "raid5f", 00:15:00.173 "superblock": false, 00:15:00.173 "num_base_bdevs": 4, 00:15:00.173 "num_base_bdevs_discovered": 4, 00:15:00.173 "num_base_bdevs_operational": 4, 00:15:00.173 "process": { 00:15:00.173 "type": "rebuild", 00:15:00.173 "target": "spare", 00:15:00.173 "progress": { 00:15:00.173 "blocks": 153600, 00:15:00.173 "percent": 78 00:15:00.173 } 00:15:00.173 }, 00:15:00.173 "base_bdevs_list": [ 00:15:00.173 { 00:15:00.173 "name": "spare", 00:15:00.173 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:15:00.173 "is_configured": true, 00:15:00.173 "data_offset": 0, 00:15:00.173 "data_size": 65536 00:15:00.173 }, 00:15:00.173 { 00:15:00.173 "name": "BaseBdev2", 00:15:00.173 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:15:00.173 "is_configured": true, 00:15:00.173 "data_offset": 0, 00:15:00.173 "data_size": 65536 00:15:00.173 }, 00:15:00.173 { 00:15:00.173 "name": "BaseBdev3", 00:15:00.173 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:15:00.173 "is_configured": true, 00:15:00.173 "data_offset": 0, 00:15:00.173 "data_size": 65536 00:15:00.173 }, 00:15:00.173 { 00:15:00.173 "name": "BaseBdev4", 00:15:00.173 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:15:00.173 "is_configured": true, 00:15:00.173 "data_offset": 0, 00:15:00.173 "data_size": 65536 00:15:00.173 } 00:15:00.173 ] 00:15:00.173 }' 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.173 23:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.554 "name": "raid_bdev1", 00:15:01.554 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:15:01.554 "strip_size_kb": 64, 00:15:01.554 "state": "online", 00:15:01.554 "raid_level": "raid5f", 00:15:01.554 "superblock": false, 00:15:01.554 "num_base_bdevs": 4, 00:15:01.554 "num_base_bdevs_discovered": 4, 00:15:01.554 "num_base_bdevs_operational": 4, 00:15:01.554 
"process": { 00:15:01.554 "type": "rebuild", 00:15:01.554 "target": "spare", 00:15:01.554 "progress": { 00:15:01.554 "blocks": 176640, 00:15:01.554 "percent": 89 00:15:01.554 } 00:15:01.554 }, 00:15:01.554 "base_bdevs_list": [ 00:15:01.554 { 00:15:01.554 "name": "spare", 00:15:01.554 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:15:01.554 "is_configured": true, 00:15:01.554 "data_offset": 0, 00:15:01.554 "data_size": 65536 00:15:01.554 }, 00:15:01.554 { 00:15:01.554 "name": "BaseBdev2", 00:15:01.554 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:15:01.554 "is_configured": true, 00:15:01.554 "data_offset": 0, 00:15:01.554 "data_size": 65536 00:15:01.554 }, 00:15:01.554 { 00:15:01.554 "name": "BaseBdev3", 00:15:01.554 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:15:01.554 "is_configured": true, 00:15:01.554 "data_offset": 0, 00:15:01.554 "data_size": 65536 00:15:01.554 }, 00:15:01.554 { 00:15:01.554 "name": "BaseBdev4", 00:15:01.554 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:15:01.554 "is_configured": true, 00:15:01.554 "data_offset": 0, 00:15:01.554 "data_size": 65536 00:15:01.554 } 00:15:01.554 ] 00:15:01.554 }' 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.554 23:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.497 [2024-11-18 23:10:21.618181] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:02.497 [2024-11-18 23:10:21.618263] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:02.497 [2024-11-18 
23:10:21.618319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.497 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.497 "name": "raid_bdev1", 00:15:02.498 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:15:02.498 "strip_size_kb": 64, 00:15:02.498 "state": "online", 00:15:02.498 "raid_level": "raid5f", 00:15:02.498 "superblock": false, 00:15:02.498 "num_base_bdevs": 4, 00:15:02.498 "num_base_bdevs_discovered": 4, 00:15:02.498 "num_base_bdevs_operational": 4, 00:15:02.498 "base_bdevs_list": [ 00:15:02.498 { 00:15:02.498 "name": "spare", 00:15:02.498 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 
00:15:02.498 }, 00:15:02.498 { 00:15:02.498 "name": "BaseBdev2", 00:15:02.498 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 }, 00:15:02.498 { 00:15:02.498 "name": "BaseBdev3", 00:15:02.498 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 }, 00:15:02.498 { 00:15:02.498 "name": "BaseBdev4", 00:15:02.498 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 } 00:15:02.498 ] 00:15:02.498 }' 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.498 "name": "raid_bdev1", 00:15:02.498 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:15:02.498 "strip_size_kb": 64, 00:15:02.498 "state": "online", 00:15:02.498 "raid_level": "raid5f", 00:15:02.498 "superblock": false, 00:15:02.498 "num_base_bdevs": 4, 00:15:02.498 "num_base_bdevs_discovered": 4, 00:15:02.498 "num_base_bdevs_operational": 4, 00:15:02.498 "base_bdevs_list": [ 00:15:02.498 { 00:15:02.498 "name": "spare", 00:15:02.498 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 }, 00:15:02.498 { 00:15:02.498 "name": "BaseBdev2", 00:15:02.498 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 }, 00:15:02.498 { 00:15:02.498 "name": "BaseBdev3", 00:15:02.498 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 }, 00:15:02.498 { 00:15:02.498 "name": "BaseBdev4", 00:15:02.498 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:15:02.498 "is_configured": true, 00:15:02.498 "data_offset": 0, 00:15:02.498 "data_size": 65536 00:15:02.498 } 00:15:02.498 ] 00:15:02.498 }' 00:15:02.498 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.757 23:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.757 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.757 "name": "raid_bdev1", 
00:15:02.757 "uuid": "5e88cc4a-4b15-421f-975a-2217d7723e7c", 00:15:02.757 "strip_size_kb": 64, 00:15:02.757 "state": "online", 00:15:02.757 "raid_level": "raid5f", 00:15:02.757 "superblock": false, 00:15:02.757 "num_base_bdevs": 4, 00:15:02.757 "num_base_bdevs_discovered": 4, 00:15:02.757 "num_base_bdevs_operational": 4, 00:15:02.757 "base_bdevs_list": [ 00:15:02.757 { 00:15:02.757 "name": "spare", 00:15:02.757 "uuid": "3faf6bc2-0094-58c3-a23f-863b4010145e", 00:15:02.757 "is_configured": true, 00:15:02.757 "data_offset": 0, 00:15:02.757 "data_size": 65536 00:15:02.757 }, 00:15:02.757 { 00:15:02.757 "name": "BaseBdev2", 00:15:02.757 "uuid": "3e56727f-8ba5-5c4a-9bf7-f627591998ed", 00:15:02.757 "is_configured": true, 00:15:02.757 "data_offset": 0, 00:15:02.757 "data_size": 65536 00:15:02.757 }, 00:15:02.757 { 00:15:02.757 "name": "BaseBdev3", 00:15:02.757 "uuid": "909b1e6e-1864-5326-81f6-8ad9392d370e", 00:15:02.757 "is_configured": true, 00:15:02.757 "data_offset": 0, 00:15:02.757 "data_size": 65536 00:15:02.757 }, 00:15:02.757 { 00:15:02.757 "name": "BaseBdev4", 00:15:02.757 "uuid": "dca63876-f1e9-5c2b-9876-7f5854ef4444", 00:15:02.757 "is_configured": true, 00:15:02.757 "data_offset": 0, 00:15:02.757 "data_size": 65536 00:15:02.757 } 00:15:02.757 ] 00:15:02.758 }' 00:15:02.758 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.758 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 [2024-11-18 23:10:22.410225] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.327 [2024-11-18 23:10:22.410255] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:15:03.327 [2024-11-18 23:10:22.410356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.327 [2024-11-18 23:10:22.410474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.327 [2024-11-18 23:10:22.410494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.327 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
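The `waitfornbd` trace that follows (the `(( i = 1 ))` / `(( i <= 20 ))` / `grep -q -w nbd0 /proc/partitions` / `break` sequence) is a bounded polling loop: retry a cheap predicate up to 20 times, then prove the device is really answering I/O with a single `dd ... iflag=direct` read. A generic sketch of that retry shape, with our own helper name rather than SPDK's (the real helper also does the `/proc/partitions` grep and the direct-I/O probe, which are omitted here to keep the sketch self-contained):

```shell
#!/usr/bin/env bash
# wait_for: run a predicate command up to 20 times with a short pause,
# succeeding as soon as the predicate does -- the same bounded-poll
# shape as waitfornbd/waitfornbd_exit in autotest_common.sh.
wait_for() {
    local i
    for (( i = 1; i <= 20; i++ )); do
        "$@" && return 0
        sleep 0.05
    done
    return 1
}

wait_for true && echo ready
```

`waitfornbd_exit` (seen later when the disks are stopped) is the mirror image: it loops while the grep still *succeeds*, breaking once the device name disappears from `/proc/partitions`.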
00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:03.328 /dev/nbd0 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:03.328 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.595 1+0 records in 00:15:03.595 1+0 records out 00:15:03.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692141 s, 5.9 MB/s 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:03.595 /dev/nbd1 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:03.595 23:10:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.595 1+0 records in 00:15:03.595 1+0 records out 00:15:03.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387065 s, 10.6 MB/s 00:15:03.595 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.856 23:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.856 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:04.116 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.116 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.116 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.117 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94978 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94978 ']' 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94978 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94978 00:15:04.377 killing process with pid 94978 00:15:04.377 Received shutdown signal, test time was about 60.000000 seconds 00:15:04.377 00:15:04.377 Latency(us) 00:15:04.377 [2024-11-18T23:10:23.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.377 [2024-11-18T23:10:23.755Z] =================================================================================================================== 00:15:04.377 [2024-11-18T23:10:23.755Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94978' 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94978 00:15:04.377 [2024-11-18 23:10:23.543072] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.377 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94978 00:15:04.377 [2024-11-18 23:10:23.634353] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.638 23:10:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:15:04.638 00:15:04.638 real 0m17.831s 00:15:04.638 user 0m21.568s 00:15:04.638 sys 0m2.620s 00:15:04.638 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.638 23:10:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.638 ************************************ 00:15:04.638 END TEST raid5f_rebuild_test 00:15:04.638 ************************************ 00:15:04.898 23:10:24 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:04.898 23:10:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:04.898 23:10:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.898 23:10:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.898 ************************************ 00:15:04.898 START TEST raid5f_rebuild_test_sb 00:15:04.898 ************************************ 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95470 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95470 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95470 ']' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.898 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.898 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:04.898 Zero copy mechanism will not be used. 
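The `raid5f_rebuild_test_sb` prologue above builds two things before launching bdevperf: a `base_bdevs` array (the `(( i <= num_base_bdevs ))` loop echoing `BaseBdev1..BaseBdev4`) and a `create_arg` string accumulated with `+=` (`-z 64` for the raid5f strip size at @586/@587, `-s` for the superblock at @593). A condensed sketch of both; variable names mirror the log, but the explicit loop is our paraphrase of the logged command-substitution form:

```shell
#!/usr/bin/env bash
# Assemble the base bdev name list, as @574/@576 do.
num_base_bdevs=4
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")
done

# Accumulate bdev_raid_create arguments the same way the script does.
create_arg=''
create_arg+=' -z 64'   # strip size in KB, raid5f only (@586/@587)
create_arg+=' -s'      # request a superblock (@593, superblock=true)

echo "${base_bdevs[*]}"
echo "$create_arg"
```

This prints `BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4` and ` -z 64 -s`, which is why the later `bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1` call in the log carries exactly those flags.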
00:15:04.898 [2024-11-18 23:10:24.164354] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:04.898 [2024-11-18 23:10:24.164509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95470 ] 00:15:05.158 [2024-11-18 23:10:24.329612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.158 [2024-11-18 23:10:24.377412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.158 [2024-11-18 23:10:24.420680] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.158 [2024-11-18 23:10:24.420713] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.728 BaseBdev1_malloc 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.728 [2024-11-18 23:10:24.995301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:05.728 [2024-11-18 23:10:24.995372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.728 [2024-11-18 23:10:24.995411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:05.728 [2024-11-18 23:10:24.995433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.728 [2024-11-18 23:10:24.997506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.728 [2024-11-18 23:10:24.997542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:05.728 BaseBdev1 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.728 23:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:05.728 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.728 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.728 BaseBdev2_malloc 00:15:05.728 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.728 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:05.728 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.728 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.728 [2024-11-18 23:10:25.038736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:05.728 
[2024-11-18 23:10:25.038854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.728 [2024-11-18 23:10:25.038903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:05.728 [2024-11-18 23:10:25.038927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.728 [2024-11-18 23:10:25.042758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.729 [2024-11-18 23:10:25.042815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:05.729 BaseBdev2 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.729 BaseBdev3_malloc 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.729 [2024-11-18 23:10:25.068627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:05.729 [2024-11-18 23:10:25.068673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.729 [2024-11-18 23:10:25.068694] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:05.729 [2024-11-18 23:10:25.068703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.729 [2024-11-18 23:10:25.070692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.729 [2024-11-18 23:10:25.070725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:05.729 BaseBdev3 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.729 BaseBdev4_malloc 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.729 [2024-11-18 23:10:25.097078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:05.729 [2024-11-18 23:10:25.097129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.729 [2024-11-18 23:10:25.097154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:05.729 [2024-11-18 23:10:25.097163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:05.729 [2024-11-18 23:10:25.099167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.729 [2024-11-18 23:10:25.099201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:05.729 BaseBdev4 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 spare_malloc 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 spare_delay 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 [2024-11-18 23:10:25.137635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.989 [2024-11-18 23:10:25.137682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.989 [2024-11-18 23:10:25.137720] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:05.989 [2024-11-18 23:10:25.137728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.989 [2024-11-18 23:10:25.139777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.989 [2024-11-18 23:10:25.139813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.989 spare 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 [2024-11-18 23:10:25.149712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.989 [2024-11-18 23:10:25.151561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.989 [2024-11-18 23:10:25.151630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.989 [2024-11-18 23:10:25.151671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.989 [2024-11-18 23:10:25.151850] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:05.989 [2024-11-18 23:10:25.151871] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:05.989 [2024-11-18 23:10:25.152133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:05.989 [2024-11-18 23:10:25.152587] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:05.989 
[2024-11-18 23:10:25.152610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:05.989 [2024-11-18 23:10:25.152740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.989 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.989 "name": "raid_bdev1", 00:15:05.989 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:05.989 "strip_size_kb": 64, 00:15:05.989 "state": "online", 00:15:05.989 "raid_level": "raid5f", 00:15:05.989 "superblock": true, 00:15:05.989 "num_base_bdevs": 4, 00:15:05.989 "num_base_bdevs_discovered": 4, 00:15:05.989 "num_base_bdevs_operational": 4, 00:15:05.990 "base_bdevs_list": [ 00:15:05.990 { 00:15:05.990 "name": "BaseBdev1", 00:15:05.990 "uuid": "14d6b8e6-d52d-5d45-9b2d-dc208f8da0d0", 00:15:05.990 "is_configured": true, 00:15:05.990 "data_offset": 2048, 00:15:05.990 "data_size": 63488 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "name": "BaseBdev2", 00:15:05.990 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:05.990 "is_configured": true, 00:15:05.990 "data_offset": 2048, 00:15:05.990 "data_size": 63488 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "name": "BaseBdev3", 00:15:05.990 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:05.990 "is_configured": true, 00:15:05.990 "data_offset": 2048, 00:15:05.990 "data_size": 63488 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "name": "BaseBdev4", 00:15:05.990 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:05.990 "is_configured": true, 00:15:05.990 "data_offset": 2048, 00:15:05.990 "data_size": 63488 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 }' 00:15:05.990 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.990 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.250 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.250 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.250 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.250 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:06.250 [2024-11-18 23:10:25.589909] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.250 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.509 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:06.509 [2024-11-18 23:10:25.857317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:06.509 /dev/nbd0 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:15:06.770 1+0 records in 00:15:06.770 1+0 records out 00:15:06.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346104 s, 11.8 MB/s 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:06.770 23:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:07.030 496+0 records in 00:15:07.030 496+0 records out 00:15:07.030 97517568 bytes (98 MB, 93 MiB) copied, 0.385321 s, 253 MB/s 00:15:07.030 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:07.030 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.030 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:07.030 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:15:07.031 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:07.031 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.031 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.290 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.290 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.290 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.290 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.290 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.290 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.291 [2024-11-18 23:10:26.543140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.291 [2024-11-18 23:10:26.559190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.291 "name": "raid_bdev1", 00:15:07.291 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:07.291 "strip_size_kb": 64, 00:15:07.291 "state": "online", 00:15:07.291 "raid_level": "raid5f", 00:15:07.291 "superblock": true, 00:15:07.291 "num_base_bdevs": 4, 00:15:07.291 "num_base_bdevs_discovered": 3, 00:15:07.291 
"num_base_bdevs_operational": 3, 00:15:07.291 "base_bdevs_list": [ 00:15:07.291 { 00:15:07.291 "name": null, 00:15:07.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.291 "is_configured": false, 00:15:07.291 "data_offset": 0, 00:15:07.291 "data_size": 63488 00:15:07.291 }, 00:15:07.291 { 00:15:07.291 "name": "BaseBdev2", 00:15:07.291 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:07.291 "is_configured": true, 00:15:07.291 "data_offset": 2048, 00:15:07.291 "data_size": 63488 00:15:07.291 }, 00:15:07.291 { 00:15:07.291 "name": "BaseBdev3", 00:15:07.291 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:07.291 "is_configured": true, 00:15:07.291 "data_offset": 2048, 00:15:07.291 "data_size": 63488 00:15:07.291 }, 00:15:07.291 { 00:15:07.291 "name": "BaseBdev4", 00:15:07.291 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:07.291 "is_configured": true, 00:15:07.291 "data_offset": 2048, 00:15:07.291 "data_size": 63488 00:15:07.291 } 00:15:07.291 ] 00:15:07.291 }' 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.291 23:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.860 23:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.860 23:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.860 23:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.860 [2024-11-18 23:10:27.018428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.860 [2024-11-18 23:10:27.021808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:07.860 [2024-11-18 23:10:27.023953] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.860 23:10:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.860 23:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.797 "name": "raid_bdev1", 00:15:08.797 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:08.797 "strip_size_kb": 64, 00:15:08.797 "state": "online", 00:15:08.797 "raid_level": "raid5f", 00:15:08.797 "superblock": true, 00:15:08.797 "num_base_bdevs": 4, 00:15:08.797 "num_base_bdevs_discovered": 4, 00:15:08.797 "num_base_bdevs_operational": 4, 00:15:08.797 "process": { 00:15:08.797 "type": "rebuild", 00:15:08.797 "target": "spare", 00:15:08.797 "progress": { 00:15:08.797 "blocks": 19200, 00:15:08.797 "percent": 10 00:15:08.797 } 00:15:08.797 }, 00:15:08.797 "base_bdevs_list": [ 00:15:08.797 { 
00:15:08.797 "name": "spare", 00:15:08.797 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:08.797 "is_configured": true, 00:15:08.797 "data_offset": 2048, 00:15:08.797 "data_size": 63488 00:15:08.797 }, 00:15:08.797 { 00:15:08.797 "name": "BaseBdev2", 00:15:08.797 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:08.797 "is_configured": true, 00:15:08.797 "data_offset": 2048, 00:15:08.797 "data_size": 63488 00:15:08.797 }, 00:15:08.797 { 00:15:08.797 "name": "BaseBdev3", 00:15:08.797 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:08.797 "is_configured": true, 00:15:08.797 "data_offset": 2048, 00:15:08.797 "data_size": 63488 00:15:08.797 }, 00:15:08.797 { 00:15:08.797 "name": "BaseBdev4", 00:15:08.797 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:08.797 "is_configured": true, 00:15:08.797 "data_offset": 2048, 00:15:08.797 "data_size": 63488 00:15:08.797 } 00:15:08.797 ] 00:15:08.797 }' 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.797 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.056 [2024-11-18 23:10:28.186898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.056 [2024-11-18 23:10:28.229132] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:09.056 
[2024-11-18 23:10:28.229193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.056 [2024-11-18 23:10:28.229229] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.056 [2024-11-18 23:10:28.229242] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.056 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.057 "name": "raid_bdev1", 00:15:09.057 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:09.057 "strip_size_kb": 64, 00:15:09.057 "state": "online", 00:15:09.057 "raid_level": "raid5f", 00:15:09.057 "superblock": true, 00:15:09.057 "num_base_bdevs": 4, 00:15:09.057 "num_base_bdevs_discovered": 3, 00:15:09.057 "num_base_bdevs_operational": 3, 00:15:09.057 "base_bdevs_list": [ 00:15:09.057 { 00:15:09.057 "name": null, 00:15:09.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.057 "is_configured": false, 00:15:09.057 "data_offset": 0, 00:15:09.057 "data_size": 63488 00:15:09.057 }, 00:15:09.057 { 00:15:09.057 "name": "BaseBdev2", 00:15:09.057 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:09.057 "is_configured": true, 00:15:09.057 "data_offset": 2048, 00:15:09.057 "data_size": 63488 00:15:09.057 }, 00:15:09.057 { 00:15:09.057 "name": "BaseBdev3", 00:15:09.057 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:09.057 "is_configured": true, 00:15:09.057 "data_offset": 2048, 00:15:09.057 "data_size": 63488 00:15:09.057 }, 00:15:09.057 { 00:15:09.057 "name": "BaseBdev4", 00:15:09.057 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:09.057 "is_configured": true, 00:15:09.057 "data_offset": 2048, 00:15:09.057 "data_size": 63488 00:15:09.057 } 00:15:09.057 ] 00:15:09.057 }' 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.057 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.625 "name": "raid_bdev1", 00:15:09.625 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:09.625 "strip_size_kb": 64, 00:15:09.625 "state": "online", 00:15:09.625 "raid_level": "raid5f", 00:15:09.625 "superblock": true, 00:15:09.625 "num_base_bdevs": 4, 00:15:09.625 "num_base_bdevs_discovered": 3, 00:15:09.625 "num_base_bdevs_operational": 3, 00:15:09.625 "base_bdevs_list": [ 00:15:09.625 { 00:15:09.625 "name": null, 00:15:09.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.625 "is_configured": false, 00:15:09.625 "data_offset": 0, 00:15:09.625 "data_size": 63488 00:15:09.625 }, 00:15:09.625 { 00:15:09.625 "name": "BaseBdev2", 00:15:09.625 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:09.625 "is_configured": true, 00:15:09.625 "data_offset": 2048, 00:15:09.625 "data_size": 63488 00:15:09.625 }, 00:15:09.625 { 00:15:09.625 "name": "BaseBdev3", 00:15:09.625 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:09.625 "is_configured": true, 
00:15:09.625 "data_offset": 2048, 00:15:09.625 "data_size": 63488 00:15:09.625 }, 00:15:09.625 { 00:15:09.625 "name": "BaseBdev4", 00:15:09.625 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:09.625 "is_configured": true, 00:15:09.625 "data_offset": 2048, 00:15:09.625 "data_size": 63488 00:15:09.625 } 00:15:09.625 ] 00:15:09.625 }' 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 [2024-11-18 23:10:28.873452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.625 [2024-11-18 23:10:28.876629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:09.625 [2024-11-18 23:10:28.878827] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.625 23:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.564 23:10:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.564 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.564 "name": "raid_bdev1", 00:15:10.564 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:10.564 "strip_size_kb": 64, 00:15:10.564 "state": "online", 00:15:10.564 "raid_level": "raid5f", 00:15:10.564 "superblock": true, 00:15:10.564 "num_base_bdevs": 4, 00:15:10.564 "num_base_bdevs_discovered": 4, 00:15:10.564 "num_base_bdevs_operational": 4, 00:15:10.564 "process": { 00:15:10.564 "type": "rebuild", 00:15:10.564 "target": "spare", 00:15:10.564 "progress": { 00:15:10.564 "blocks": 19200, 00:15:10.564 "percent": 10 00:15:10.564 } 00:15:10.564 }, 00:15:10.564 "base_bdevs_list": [ 00:15:10.564 { 00:15:10.564 "name": "spare", 00:15:10.564 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:10.564 "is_configured": true, 00:15:10.564 "data_offset": 2048, 00:15:10.564 "data_size": 63488 00:15:10.564 }, 00:15:10.564 { 00:15:10.564 "name": "BaseBdev2", 00:15:10.564 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:10.564 "is_configured": true, 00:15:10.564 "data_offset": 2048, 00:15:10.564 "data_size": 63488 
00:15:10.564 }, 00:15:10.564 { 00:15:10.564 "name": "BaseBdev3", 00:15:10.564 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:10.564 "is_configured": true, 00:15:10.564 "data_offset": 2048, 00:15:10.564 "data_size": 63488 00:15:10.564 }, 00:15:10.564 { 00:15:10.564 "name": "BaseBdev4", 00:15:10.564 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:10.564 "is_configured": true, 00:15:10.564 "data_offset": 2048, 00:15:10.564 "data_size": 63488 00:15:10.564 } 00:15:10.564 ] 00:15:10.564 }' 00:15:10.824 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.824 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.824 23:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:10.824 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.824 23:10:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.824 "name": "raid_bdev1", 00:15:10.824 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:10.824 "strip_size_kb": 64, 00:15:10.824 "state": "online", 00:15:10.824 "raid_level": "raid5f", 00:15:10.824 "superblock": true, 00:15:10.824 "num_base_bdevs": 4, 00:15:10.824 "num_base_bdevs_discovered": 4, 00:15:10.824 "num_base_bdevs_operational": 4, 00:15:10.824 "process": { 00:15:10.824 "type": "rebuild", 00:15:10.824 "target": "spare", 00:15:10.824 "progress": { 00:15:10.824 "blocks": 21120, 00:15:10.824 "percent": 11 00:15:10.824 } 00:15:10.824 }, 00:15:10.824 "base_bdevs_list": [ 00:15:10.824 { 00:15:10.824 "name": "spare", 00:15:10.824 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:10.824 "is_configured": true, 00:15:10.824 "data_offset": 2048, 00:15:10.824 "data_size": 63488 00:15:10.824 }, 00:15:10.824 { 00:15:10.824 "name": "BaseBdev2", 00:15:10.824 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:10.824 "is_configured": true, 00:15:10.824 "data_offset": 2048, 00:15:10.824 "data_size": 63488 
00:15:10.824 }, 00:15:10.824 { 00:15:10.824 "name": "BaseBdev3", 00:15:10.824 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:10.824 "is_configured": true, 00:15:10.824 "data_offset": 2048, 00:15:10.824 "data_size": 63488 00:15:10.824 }, 00:15:10.824 { 00:15:10.824 "name": "BaseBdev4", 00:15:10.824 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:10.824 "is_configured": true, 00:15:10.824 "data_offset": 2048, 00:15:10.824 "data_size": 63488 00:15:10.824 } 00:15:10.824 ] 00:15:10.824 }' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.824 23:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.211 "name": "raid_bdev1", 00:15:12.211 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:12.211 "strip_size_kb": 64, 00:15:12.211 "state": "online", 00:15:12.211 "raid_level": "raid5f", 00:15:12.211 "superblock": true, 00:15:12.211 "num_base_bdevs": 4, 00:15:12.211 "num_base_bdevs_discovered": 4, 00:15:12.211 "num_base_bdevs_operational": 4, 00:15:12.211 "process": { 00:15:12.211 "type": "rebuild", 00:15:12.211 "target": "spare", 00:15:12.211 "progress": { 00:15:12.211 "blocks": 44160, 00:15:12.211 "percent": 23 00:15:12.211 } 00:15:12.211 }, 00:15:12.211 "base_bdevs_list": [ 00:15:12.211 { 00:15:12.211 "name": "spare", 00:15:12.211 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:12.211 "is_configured": true, 00:15:12.211 "data_offset": 2048, 00:15:12.211 "data_size": 63488 00:15:12.211 }, 00:15:12.211 { 00:15:12.211 "name": "BaseBdev2", 00:15:12.211 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:12.211 "is_configured": true, 00:15:12.211 "data_offset": 2048, 00:15:12.211 "data_size": 63488 00:15:12.211 }, 00:15:12.211 { 00:15:12.211 "name": "BaseBdev3", 00:15:12.211 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:12.211 "is_configured": true, 00:15:12.211 "data_offset": 2048, 00:15:12.211 "data_size": 63488 00:15:12.211 }, 00:15:12.211 { 00:15:12.211 "name": "BaseBdev4", 00:15:12.211 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:12.211 "is_configured": true, 00:15:12.211 "data_offset": 2048, 00:15:12.211 "data_size": 63488 00:15:12.211 } 00:15:12.211 ] 00:15:12.211 }' 00:15:12.211 23:10:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.211 23:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.150 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.150 "name": "raid_bdev1", 00:15:13.151 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:13.151 
"strip_size_kb": 64, 00:15:13.151 "state": "online", 00:15:13.151 "raid_level": "raid5f", 00:15:13.151 "superblock": true, 00:15:13.151 "num_base_bdevs": 4, 00:15:13.151 "num_base_bdevs_discovered": 4, 00:15:13.151 "num_base_bdevs_operational": 4, 00:15:13.151 "process": { 00:15:13.151 "type": "rebuild", 00:15:13.151 "target": "spare", 00:15:13.151 "progress": { 00:15:13.151 "blocks": 65280, 00:15:13.151 "percent": 34 00:15:13.151 } 00:15:13.151 }, 00:15:13.151 "base_bdevs_list": [ 00:15:13.151 { 00:15:13.151 "name": "spare", 00:15:13.151 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:13.151 "is_configured": true, 00:15:13.151 "data_offset": 2048, 00:15:13.151 "data_size": 63488 00:15:13.151 }, 00:15:13.151 { 00:15:13.151 "name": "BaseBdev2", 00:15:13.151 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:13.151 "is_configured": true, 00:15:13.151 "data_offset": 2048, 00:15:13.151 "data_size": 63488 00:15:13.151 }, 00:15:13.151 { 00:15:13.151 "name": "BaseBdev3", 00:15:13.151 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:13.151 "is_configured": true, 00:15:13.151 "data_offset": 2048, 00:15:13.151 "data_size": 63488 00:15:13.151 }, 00:15:13.151 { 00:15:13.151 "name": "BaseBdev4", 00:15:13.151 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:13.151 "is_configured": true, 00:15:13.151 "data_offset": 2048, 00:15:13.151 "data_size": 63488 00:15:13.151 } 00:15:13.151 ] 00:15:13.151 }' 00:15:13.151 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.151 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.151 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.151 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.151 23:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.091 
23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.091 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.352 "name": "raid_bdev1", 00:15:14.352 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:14.352 "strip_size_kb": 64, 00:15:14.352 "state": "online", 00:15:14.352 "raid_level": "raid5f", 00:15:14.352 "superblock": true, 00:15:14.352 "num_base_bdevs": 4, 00:15:14.352 "num_base_bdevs_discovered": 4, 00:15:14.352 "num_base_bdevs_operational": 4, 00:15:14.352 "process": { 00:15:14.352 "type": "rebuild", 00:15:14.352 "target": "spare", 00:15:14.352 "progress": { 00:15:14.352 "blocks": 86400, 00:15:14.352 "percent": 45 00:15:14.352 } 00:15:14.352 }, 00:15:14.352 "base_bdevs_list": [ 00:15:14.352 { 00:15:14.352 "name": "spare", 00:15:14.352 "uuid": 
"3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:14.352 "is_configured": true, 00:15:14.352 "data_offset": 2048, 00:15:14.352 "data_size": 63488 00:15:14.352 }, 00:15:14.352 { 00:15:14.352 "name": "BaseBdev2", 00:15:14.352 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:14.352 "is_configured": true, 00:15:14.352 "data_offset": 2048, 00:15:14.352 "data_size": 63488 00:15:14.352 }, 00:15:14.352 { 00:15:14.352 "name": "BaseBdev3", 00:15:14.352 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:14.352 "is_configured": true, 00:15:14.352 "data_offset": 2048, 00:15:14.352 "data_size": 63488 00:15:14.352 }, 00:15:14.352 { 00:15:14.352 "name": "BaseBdev4", 00:15:14.352 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:14.352 "is_configured": true, 00:15:14.352 "data_offset": 2048, 00:15:14.352 "data_size": 63488 00:15:14.352 } 00:15:14.352 ] 00:15:14.352 }' 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.352 23:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.292 "name": "raid_bdev1", 00:15:15.292 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:15.292 "strip_size_kb": 64, 00:15:15.292 "state": "online", 00:15:15.292 "raid_level": "raid5f", 00:15:15.292 "superblock": true, 00:15:15.292 "num_base_bdevs": 4, 00:15:15.292 "num_base_bdevs_discovered": 4, 00:15:15.292 "num_base_bdevs_operational": 4, 00:15:15.292 "process": { 00:15:15.292 "type": "rebuild", 00:15:15.292 "target": "spare", 00:15:15.292 "progress": { 00:15:15.292 "blocks": 109440, 00:15:15.292 "percent": 57 00:15:15.292 } 00:15:15.292 }, 00:15:15.292 "base_bdevs_list": [ 00:15:15.292 { 00:15:15.292 "name": "spare", 00:15:15.292 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:15.292 "is_configured": true, 00:15:15.292 "data_offset": 2048, 00:15:15.292 "data_size": 63488 00:15:15.292 }, 00:15:15.292 { 00:15:15.292 "name": "BaseBdev2", 00:15:15.292 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:15.292 "is_configured": true, 00:15:15.292 "data_offset": 2048, 00:15:15.292 "data_size": 63488 00:15:15.292 }, 00:15:15.292 { 00:15:15.292 "name": "BaseBdev3", 00:15:15.292 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:15.292 "is_configured": true, 00:15:15.292 
"data_offset": 2048, 00:15:15.292 "data_size": 63488 00:15:15.292 }, 00:15:15.292 { 00:15:15.292 "name": "BaseBdev4", 00:15:15.292 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:15.292 "is_configured": true, 00:15:15.292 "data_offset": 2048, 00:15:15.292 "data_size": 63488 00:15:15.292 } 00:15:15.292 ] 00:15:15.292 }' 00:15:15.292 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.552 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.552 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.552 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.552 23:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.491 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.491 "name": "raid_bdev1", 00:15:16.491 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:16.491 "strip_size_kb": 64, 00:15:16.491 "state": "online", 00:15:16.491 "raid_level": "raid5f", 00:15:16.491 "superblock": true, 00:15:16.492 "num_base_bdevs": 4, 00:15:16.492 "num_base_bdevs_discovered": 4, 00:15:16.492 "num_base_bdevs_operational": 4, 00:15:16.492 "process": { 00:15:16.492 "type": "rebuild", 00:15:16.492 "target": "spare", 00:15:16.492 "progress": { 00:15:16.492 "blocks": 130560, 00:15:16.492 "percent": 68 00:15:16.492 } 00:15:16.492 }, 00:15:16.492 "base_bdevs_list": [ 00:15:16.492 { 00:15:16.492 "name": "spare", 00:15:16.492 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:16.492 "is_configured": true, 00:15:16.492 "data_offset": 2048, 00:15:16.492 "data_size": 63488 00:15:16.492 }, 00:15:16.492 { 00:15:16.492 "name": "BaseBdev2", 00:15:16.492 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:16.492 "is_configured": true, 00:15:16.492 "data_offset": 2048, 00:15:16.492 "data_size": 63488 00:15:16.492 }, 00:15:16.492 { 00:15:16.492 "name": "BaseBdev3", 00:15:16.492 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:16.492 "is_configured": true, 00:15:16.492 "data_offset": 2048, 00:15:16.492 "data_size": 63488 00:15:16.492 }, 00:15:16.492 { 00:15:16.492 "name": "BaseBdev4", 00:15:16.492 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:16.492 "is_configured": true, 00:15:16.492 "data_offset": 2048, 00:15:16.492 "data_size": 63488 00:15:16.492 } 00:15:16.492 ] 00:15:16.492 }' 00:15:16.492 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.492 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:15:16.492 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.752 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.752 23:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.700 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.700 "name": "raid_bdev1", 00:15:17.700 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:17.700 "strip_size_kb": 64, 00:15:17.700 "state": "online", 00:15:17.700 "raid_level": "raid5f", 00:15:17.700 "superblock": true, 00:15:17.700 "num_base_bdevs": 4, 00:15:17.700 "num_base_bdevs_discovered": 4, 
00:15:17.700 "num_base_bdevs_operational": 4, 00:15:17.700 "process": { 00:15:17.700 "type": "rebuild", 00:15:17.700 "target": "spare", 00:15:17.700 "progress": { 00:15:17.701 "blocks": 153600, 00:15:17.701 "percent": 80 00:15:17.701 } 00:15:17.701 }, 00:15:17.701 "base_bdevs_list": [ 00:15:17.701 { 00:15:17.701 "name": "spare", 00:15:17.701 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:17.701 "is_configured": true, 00:15:17.701 "data_offset": 2048, 00:15:17.701 "data_size": 63488 00:15:17.701 }, 00:15:17.701 { 00:15:17.701 "name": "BaseBdev2", 00:15:17.701 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:17.701 "is_configured": true, 00:15:17.701 "data_offset": 2048, 00:15:17.701 "data_size": 63488 00:15:17.701 }, 00:15:17.701 { 00:15:17.701 "name": "BaseBdev3", 00:15:17.701 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:17.701 "is_configured": true, 00:15:17.701 "data_offset": 2048, 00:15:17.701 "data_size": 63488 00:15:17.701 }, 00:15:17.701 { 00:15:17.701 "name": "BaseBdev4", 00:15:17.701 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:17.701 "is_configured": true, 00:15:17.701 "data_offset": 2048, 00:15:17.701 "data_size": 63488 00:15:17.701 } 00:15:17.701 ] 00:15:17.701 }' 00:15:17.701 23:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.701 23:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.701 23:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.701 23:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.701 23:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.083 "name": "raid_bdev1", 00:15:19.083 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:19.083 "strip_size_kb": 64, 00:15:19.083 "state": "online", 00:15:19.083 "raid_level": "raid5f", 00:15:19.083 "superblock": true, 00:15:19.083 "num_base_bdevs": 4, 00:15:19.083 "num_base_bdevs_discovered": 4, 00:15:19.083 "num_base_bdevs_operational": 4, 00:15:19.083 "process": { 00:15:19.083 "type": "rebuild", 00:15:19.083 "target": "spare", 00:15:19.083 "progress": { 00:15:19.083 "blocks": 174720, 00:15:19.083 "percent": 91 00:15:19.083 } 00:15:19.083 }, 00:15:19.083 "base_bdevs_list": [ 00:15:19.083 { 00:15:19.083 "name": "spare", 00:15:19.083 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:19.083 "is_configured": true, 00:15:19.083 "data_offset": 2048, 00:15:19.083 "data_size": 63488 00:15:19.083 }, 00:15:19.083 { 00:15:19.083 "name": "BaseBdev2", 
00:15:19.083 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:19.083 "is_configured": true, 00:15:19.083 "data_offset": 2048, 00:15:19.083 "data_size": 63488 00:15:19.083 }, 00:15:19.083 { 00:15:19.083 "name": "BaseBdev3", 00:15:19.083 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:19.083 "is_configured": true, 00:15:19.083 "data_offset": 2048, 00:15:19.083 "data_size": 63488 00:15:19.083 }, 00:15:19.083 { 00:15:19.083 "name": "BaseBdev4", 00:15:19.083 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:19.083 "is_configured": true, 00:15:19.083 "data_offset": 2048, 00:15:19.083 "data_size": 63488 00:15:19.083 } 00:15:19.083 ] 00:15:19.083 }' 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.083 23:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.654 [2024-11-18 23:10:38.916748] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:19.654 [2024-11-18 23:10:38.916813] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:19.654 [2024-11-18 23:10:38.916930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.920 23:10:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.920 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.921 "name": "raid_bdev1", 00:15:19.921 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:19.921 "strip_size_kb": 64, 00:15:19.921 "state": "online", 00:15:19.921 "raid_level": "raid5f", 00:15:19.921 "superblock": true, 00:15:19.921 "num_base_bdevs": 4, 00:15:19.921 "num_base_bdevs_discovered": 4, 00:15:19.921 "num_base_bdevs_operational": 4, 00:15:19.921 "base_bdevs_list": [ 00:15:19.921 { 00:15:19.921 "name": "spare", 00:15:19.921 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:19.921 "is_configured": true, 00:15:19.921 "data_offset": 2048, 00:15:19.921 "data_size": 63488 00:15:19.921 }, 00:15:19.921 { 00:15:19.921 "name": "BaseBdev2", 00:15:19.921 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:19.921 "is_configured": true, 00:15:19.921 "data_offset": 2048, 00:15:19.921 "data_size": 63488 00:15:19.921 }, 00:15:19.921 { 00:15:19.921 "name": "BaseBdev3", 00:15:19.921 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:19.921 "is_configured": true, 00:15:19.921 "data_offset": 2048, 00:15:19.921 
"data_size": 63488 00:15:19.921 }, 00:15:19.921 { 00:15:19.921 "name": "BaseBdev4", 00:15:19.921 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:19.921 "is_configured": true, 00:15:19.921 "data_offset": 2048, 00:15:19.921 "data_size": 63488 00:15:19.921 } 00:15:19.921 ] 00:15:19.921 }' 00:15:19.921 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.921 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:19.921 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.180 23:10:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.180 "name": "raid_bdev1", 00:15:20.180 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:20.180 "strip_size_kb": 64, 00:15:20.180 "state": "online", 00:15:20.180 "raid_level": "raid5f", 00:15:20.180 "superblock": true, 00:15:20.180 "num_base_bdevs": 4, 00:15:20.180 "num_base_bdevs_discovered": 4, 00:15:20.180 "num_base_bdevs_operational": 4, 00:15:20.180 "base_bdevs_list": [ 00:15:20.180 { 00:15:20.180 "name": "spare", 00:15:20.180 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:20.180 "is_configured": true, 00:15:20.180 "data_offset": 2048, 00:15:20.180 "data_size": 63488 00:15:20.180 }, 00:15:20.180 { 00:15:20.180 "name": "BaseBdev2", 00:15:20.180 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:20.180 "is_configured": true, 00:15:20.180 "data_offset": 2048, 00:15:20.180 "data_size": 63488 00:15:20.180 }, 00:15:20.180 { 00:15:20.180 "name": "BaseBdev3", 00:15:20.180 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:20.180 "is_configured": true, 00:15:20.180 "data_offset": 2048, 00:15:20.180 "data_size": 63488 00:15:20.180 }, 00:15:20.180 { 00:15:20.180 "name": "BaseBdev4", 00:15:20.180 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:20.180 "is_configured": true, 00:15:20.180 "data_offset": 2048, 00:15:20.180 "data_size": 63488 00:15:20.180 } 00:15:20.180 ] 00:15:20.180 }' 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.180 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.180 "name": "raid_bdev1", 00:15:20.180 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:20.180 "strip_size_kb": 64, 00:15:20.180 "state": "online", 00:15:20.180 "raid_level": "raid5f", 00:15:20.180 "superblock": true, 00:15:20.180 "num_base_bdevs": 4, 00:15:20.181 "num_base_bdevs_discovered": 4, 00:15:20.181 
"num_base_bdevs_operational": 4, 00:15:20.181 "base_bdevs_list": [ 00:15:20.181 { 00:15:20.181 "name": "spare", 00:15:20.181 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:20.181 "is_configured": true, 00:15:20.181 "data_offset": 2048, 00:15:20.181 "data_size": 63488 00:15:20.181 }, 00:15:20.181 { 00:15:20.181 "name": "BaseBdev2", 00:15:20.181 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:20.181 "is_configured": true, 00:15:20.181 "data_offset": 2048, 00:15:20.181 "data_size": 63488 00:15:20.181 }, 00:15:20.181 { 00:15:20.181 "name": "BaseBdev3", 00:15:20.181 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:20.181 "is_configured": true, 00:15:20.181 "data_offset": 2048, 00:15:20.181 "data_size": 63488 00:15:20.181 }, 00:15:20.181 { 00:15:20.181 "name": "BaseBdev4", 00:15:20.181 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:20.181 "is_configured": true, 00:15:20.181 "data_offset": 2048, 00:15:20.181 "data_size": 63488 00:15:20.181 } 00:15:20.181 ] 00:15:20.181 }' 00:15:20.181 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.181 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 [2024-11-18 23:10:39.960049] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.751 [2024-11-18 23:10:39.960084] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.751 [2024-11-18 23:10:39.960155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.751 [2024-11-18 23:10:39.960235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:20.751 [2024-11-18 23:10:39.960249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 23:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:20.751 23:10:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:20.751 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:21.011 /dev/nbd0 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.011 1+0 records in 00:15:21.011 1+0 records out 00:15:21.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602429 s, 6.8 MB/s 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.011 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:21.272 /dev/nbd1 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.272 1+0 records in 00:15:21.272 1+0 records out 00:15:21.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315795 s, 13.0 MB/s 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.272 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.532 23:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:21.791 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.791 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.791 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.791 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.791 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.792 [2024-11-18 23:10:41.062703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:21.792 [2024-11-18 23:10:41.062757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.792 [2024-11-18 23:10:41.062777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:21.792 [2024-11-18 23:10:41.062787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.792 [2024-11-18 23:10:41.064976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.792 [2024-11-18 23:10:41.065059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:21.792 [2024-11-18 23:10:41.065202] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:21.792 [2024-11-18 23:10:41.065291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.792 [2024-11-18 23:10:41.065467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.792 [2024-11-18 23:10:41.065617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:21.792 [2024-11-18 23:10:41.065737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:21.792 spare 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.792 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.792 [2024-11-18 23:10:41.165685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:21.792 [2024-11-18 23:10:41.165708] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:21.792 [2024-11-18 23:10:41.165974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:21.792 [2024-11-18 23:10:41.166430] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:21.792 [2024-11-18 23:10:41.166445] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:21.792 [2024-11-18 23:10:41.166576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.051 23:10:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.051 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.052 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.052 "name": "raid_bdev1", 00:15:22.052 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:22.052 "strip_size_kb": 64, 00:15:22.052 "state": "online", 00:15:22.052 "raid_level": "raid5f", 00:15:22.052 "superblock": true, 00:15:22.052 "num_base_bdevs": 4, 00:15:22.052 "num_base_bdevs_discovered": 4, 00:15:22.052 "num_base_bdevs_operational": 4, 00:15:22.052 "base_bdevs_list": [ 00:15:22.052 { 00:15:22.052 "name": "spare", 00:15:22.052 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:22.052 "is_configured": true, 00:15:22.052 "data_offset": 2048, 00:15:22.052 "data_size": 63488 00:15:22.052 }, 00:15:22.052 { 00:15:22.052 "name": "BaseBdev2", 00:15:22.052 "uuid": 
"8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:22.052 "is_configured": true, 00:15:22.052 "data_offset": 2048, 00:15:22.052 "data_size": 63488 00:15:22.052 }, 00:15:22.052 { 00:15:22.052 "name": "BaseBdev3", 00:15:22.052 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:22.052 "is_configured": true, 00:15:22.052 "data_offset": 2048, 00:15:22.052 "data_size": 63488 00:15:22.052 }, 00:15:22.052 { 00:15:22.052 "name": "BaseBdev4", 00:15:22.052 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:22.052 "is_configured": true, 00:15:22.052 "data_offset": 2048, 00:15:22.052 "data_size": 63488 00:15:22.052 } 00:15:22.052 ] 00:15:22.052 }' 00:15:22.052 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.052 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.311 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.311 23:10:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.311 "name": "raid_bdev1", 00:15:22.311 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:22.311 "strip_size_kb": 64, 00:15:22.311 "state": "online", 00:15:22.311 "raid_level": "raid5f", 00:15:22.311 "superblock": true, 00:15:22.311 "num_base_bdevs": 4, 00:15:22.311 "num_base_bdevs_discovered": 4, 00:15:22.311 "num_base_bdevs_operational": 4, 00:15:22.311 "base_bdevs_list": [ 00:15:22.311 { 00:15:22.311 "name": "spare", 00:15:22.311 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:22.311 "is_configured": true, 00:15:22.311 "data_offset": 2048, 00:15:22.311 "data_size": 63488 00:15:22.311 }, 00:15:22.311 { 00:15:22.311 "name": "BaseBdev2", 00:15:22.311 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:22.311 "is_configured": true, 00:15:22.311 "data_offset": 2048, 00:15:22.311 "data_size": 63488 00:15:22.311 }, 00:15:22.311 { 00:15:22.311 "name": "BaseBdev3", 00:15:22.311 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:22.311 "is_configured": true, 00:15:22.311 "data_offset": 2048, 00:15:22.311 "data_size": 63488 00:15:22.311 }, 00:15:22.311 { 00:15:22.312 "name": "BaseBdev4", 00:15:22.312 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:22.312 "is_configured": true, 00:15:22.312 "data_offset": 2048, 00:15:22.312 "data_size": 63488 00:15:22.312 } 00:15:22.312 ] 00:15:22.312 }' 00:15:22.312 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.312 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.312 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.571 
23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.571 [2024-11-18 23:10:41.789492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.571 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.571 "name": "raid_bdev1", 00:15:22.571 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:22.571 "strip_size_kb": 64, 00:15:22.571 "state": "online", 00:15:22.571 "raid_level": "raid5f", 00:15:22.571 "superblock": true, 00:15:22.571 "num_base_bdevs": 4, 00:15:22.571 "num_base_bdevs_discovered": 3, 00:15:22.571 "num_base_bdevs_operational": 3, 00:15:22.572 "base_bdevs_list": [ 00:15:22.572 { 00:15:22.572 "name": null, 00:15:22.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.572 "is_configured": false, 00:15:22.572 "data_offset": 0, 00:15:22.572 "data_size": 63488 00:15:22.572 }, 00:15:22.572 { 00:15:22.572 "name": "BaseBdev2", 00:15:22.572 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:22.572 "is_configured": true, 00:15:22.572 "data_offset": 2048, 00:15:22.572 "data_size": 63488 00:15:22.572 }, 00:15:22.572 { 00:15:22.572 "name": "BaseBdev3", 00:15:22.572 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:22.572 "is_configured": true, 00:15:22.572 "data_offset": 2048, 00:15:22.572 "data_size": 63488 00:15:22.572 }, 00:15:22.572 { 00:15:22.572 "name": "BaseBdev4", 
00:15:22.572 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:22.572 "is_configured": true, 00:15:22.572 "data_offset": 2048, 00:15:22.572 "data_size": 63488 00:15:22.572 } 00:15:22.572 ] 00:15:22.572 }' 00:15:22.572 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.572 23:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.141 23:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.141 23:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.141 23:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.141 [2024-11-18 23:10:42.228760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.141 [2024-11-18 23:10:42.228986] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:23.141 [2024-11-18 23:10:42.229055] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:23.141 [2024-11-18 23:10:42.229143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.141 [2024-11-18 23:10:42.232196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:23.141 [2024-11-18 23:10:42.234351] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.141 23:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.141 23:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.081 "name": "raid_bdev1", 00:15:24.081 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:24.081 "strip_size_kb": 64, 00:15:24.081 "state": "online", 00:15:24.081 
"raid_level": "raid5f", 00:15:24.081 "superblock": true, 00:15:24.081 "num_base_bdevs": 4, 00:15:24.081 "num_base_bdevs_discovered": 4, 00:15:24.081 "num_base_bdevs_operational": 4, 00:15:24.081 "process": { 00:15:24.081 "type": "rebuild", 00:15:24.081 "target": "spare", 00:15:24.081 "progress": { 00:15:24.081 "blocks": 19200, 00:15:24.081 "percent": 10 00:15:24.081 } 00:15:24.081 }, 00:15:24.081 "base_bdevs_list": [ 00:15:24.081 { 00:15:24.081 "name": "spare", 00:15:24.081 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 2048, 00:15:24.081 "data_size": 63488 00:15:24.081 }, 00:15:24.081 { 00:15:24.081 "name": "BaseBdev2", 00:15:24.081 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 2048, 00:15:24.081 "data_size": 63488 00:15:24.081 }, 00:15:24.081 { 00:15:24.081 "name": "BaseBdev3", 00:15:24.081 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 2048, 00:15:24.081 "data_size": 63488 00:15:24.081 }, 00:15:24.081 { 00:15:24.081 "name": "BaseBdev4", 00:15:24.081 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 2048, 00:15:24.081 "data_size": 63488 00:15:24.081 } 00:15:24.081 ] 00:15:24.081 }' 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.081 [2024-11-18 23:10:43.404918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.081 [2024-11-18 23:10:43.439351] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:24.081 [2024-11-18 23:10:43.439473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.081 [2024-11-18 23:10:43.439496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.081 [2024-11-18 23:10:43.439503] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:24.081 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.340 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.340 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.340 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.340 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.340 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.340 "name": "raid_bdev1", 00:15:24.340 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:24.340 "strip_size_kb": 64, 00:15:24.340 "state": "online", 00:15:24.340 "raid_level": "raid5f", 00:15:24.340 "superblock": true, 00:15:24.340 "num_base_bdevs": 4, 00:15:24.340 "num_base_bdevs_discovered": 3, 00:15:24.340 "num_base_bdevs_operational": 3, 00:15:24.340 "base_bdevs_list": [ 00:15:24.340 { 00:15:24.340 "name": null, 00:15:24.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.341 "is_configured": false, 00:15:24.341 "data_offset": 0, 00:15:24.341 "data_size": 63488 00:15:24.341 }, 00:15:24.341 { 00:15:24.341 "name": "BaseBdev2", 00:15:24.341 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:24.341 "is_configured": true, 00:15:24.341 "data_offset": 2048, 00:15:24.341 "data_size": 63488 00:15:24.341 }, 00:15:24.341 { 00:15:24.341 "name": "BaseBdev3", 00:15:24.341 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:24.341 "is_configured": true, 00:15:24.341 "data_offset": 2048, 00:15:24.341 "data_size": 63488 00:15:24.341 }, 00:15:24.341 { 00:15:24.341 "name": "BaseBdev4", 00:15:24.341 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:24.341 "is_configured": true, 00:15:24.341 "data_offset": 2048, 00:15:24.341 "data_size": 63488 00:15:24.341 } 00:15:24.341 ] 00:15:24.341 
}' 00:15:24.341 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.341 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.600 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.600 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.600 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.600 [2024-11-18 23:10:43.939424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.600 [2024-11-18 23:10:43.939521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.600 [2024-11-18 23:10:43.939593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:24.600 [2024-11-18 23:10:43.939631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.600 [2024-11-18 23:10:43.940089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.600 [2024-11-18 23:10:43.940146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.600 [2024-11-18 23:10:43.940271] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:24.600 [2024-11-18 23:10:43.940326] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.600 [2024-11-18 23:10:43.940394] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:24.600 [2024-11-18 23:10:43.940453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.600 [2024-11-18 23:10:43.943489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:24.600 spare 00:15:24.600 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.600 23:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:24.600 [2024-11-18 23:10:43.945738] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.980 23:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.980 "name": "raid_bdev1", 00:15:25.980 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:25.980 "strip_size_kb": 64, 00:15:25.980 "state": 
"online", 00:15:25.980 "raid_level": "raid5f", 00:15:25.980 "superblock": true, 00:15:25.980 "num_base_bdevs": 4, 00:15:25.980 "num_base_bdevs_discovered": 4, 00:15:25.980 "num_base_bdevs_operational": 4, 00:15:25.980 "process": { 00:15:25.980 "type": "rebuild", 00:15:25.980 "target": "spare", 00:15:25.980 "progress": { 00:15:25.980 "blocks": 19200, 00:15:25.980 "percent": 10 00:15:25.980 } 00:15:25.980 }, 00:15:25.980 "base_bdevs_list": [ 00:15:25.980 { 00:15:25.980 "name": "spare", 00:15:25.980 "uuid": "3de2160a-9cac-552c-ad4e-399ea07dd560", 00:15:25.980 "is_configured": true, 00:15:25.980 "data_offset": 2048, 00:15:25.980 "data_size": 63488 00:15:25.980 }, 00:15:25.980 { 00:15:25.980 "name": "BaseBdev2", 00:15:25.980 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:25.980 "is_configured": true, 00:15:25.980 "data_offset": 2048, 00:15:25.980 "data_size": 63488 00:15:25.980 }, 00:15:25.980 { 00:15:25.980 "name": "BaseBdev3", 00:15:25.980 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:25.980 "is_configured": true, 00:15:25.980 "data_offset": 2048, 00:15:25.980 "data_size": 63488 00:15:25.980 }, 00:15:25.980 { 00:15:25.980 "name": "BaseBdev4", 00:15:25.980 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:25.980 "is_configured": true, 00:15:25.980 "data_offset": 2048, 00:15:25.980 "data_size": 63488 00:15:25.980 } 00:15:25.980 ] 00:15:25.980 }' 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.980 23:10:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.980 [2024-11-18 23:10:45.110325] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.980 [2024-11-18 23:10:45.150817] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.980 [2024-11-18 23:10:45.150869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.980 [2024-11-18 23:10:45.150884] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.980 [2024-11-18 23:10:45.150891] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.980 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.981 23:10:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.981 "name": "raid_bdev1", 00:15:25.981 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:25.981 "strip_size_kb": 64, 00:15:25.981 "state": "online", 00:15:25.981 "raid_level": "raid5f", 00:15:25.981 "superblock": true, 00:15:25.981 "num_base_bdevs": 4, 00:15:25.981 "num_base_bdevs_discovered": 3, 00:15:25.981 "num_base_bdevs_operational": 3, 00:15:25.981 "base_bdevs_list": [ 00:15:25.981 { 00:15:25.981 "name": null, 00:15:25.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.981 "is_configured": false, 00:15:25.981 "data_offset": 0, 00:15:25.981 "data_size": 63488 00:15:25.981 }, 00:15:25.981 { 00:15:25.981 "name": "BaseBdev2", 00:15:25.981 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:25.981 "is_configured": true, 00:15:25.981 "data_offset": 2048, 00:15:25.981 "data_size": 63488 00:15:25.981 }, 00:15:25.981 { 00:15:25.981 "name": "BaseBdev3", 00:15:25.981 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:25.981 "is_configured": true, 00:15:25.981 "data_offset": 2048, 00:15:25.981 "data_size": 63488 00:15:25.981 }, 00:15:25.981 { 00:15:25.981 "name": "BaseBdev4", 00:15:25.981 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:25.981 "is_configured": true, 00:15:25.981 "data_offset": 2048, 00:15:25.981 
"data_size": 63488 00:15:25.981 } 00:15:25.981 ] 00:15:25.981 }' 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.981 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.499 "name": "raid_bdev1", 00:15:26.499 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:26.499 "strip_size_kb": 64, 00:15:26.499 "state": "online", 00:15:26.499 "raid_level": "raid5f", 00:15:26.499 "superblock": true, 00:15:26.499 "num_base_bdevs": 4, 00:15:26.499 "num_base_bdevs_discovered": 3, 00:15:26.499 "num_base_bdevs_operational": 3, 00:15:26.499 "base_bdevs_list": [ 00:15:26.499 { 00:15:26.499 "name": null, 00:15:26.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.499 
"is_configured": false, 00:15:26.499 "data_offset": 0, 00:15:26.499 "data_size": 63488 00:15:26.499 }, 00:15:26.499 { 00:15:26.499 "name": "BaseBdev2", 00:15:26.499 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:26.499 "is_configured": true, 00:15:26.499 "data_offset": 2048, 00:15:26.499 "data_size": 63488 00:15:26.499 }, 00:15:26.499 { 00:15:26.499 "name": "BaseBdev3", 00:15:26.499 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:26.499 "is_configured": true, 00:15:26.499 "data_offset": 2048, 00:15:26.499 "data_size": 63488 00:15:26.499 }, 00:15:26.499 { 00:15:26.499 "name": "BaseBdev4", 00:15:26.499 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:26.499 "is_configured": true, 00:15:26.499 "data_offset": 2048, 00:15:26.499 "data_size": 63488 00:15:26.499 } 00:15:26.499 ] 00:15:26.499 }' 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.499 23:10:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 [2024-11-18 23:10:45.750647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.499 [2024-11-18 23:10:45.750752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.499 [2024-11-18 23:10:45.750792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:26.499 [2024-11-18 23:10:45.750803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.499 [2024-11-18 23:10:45.751222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.499 [2024-11-18 23:10:45.751242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.499 [2024-11-18 23:10:45.751308] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:26.499 [2024-11-18 23:10:45.751351] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.499 [2024-11-18 23:10:45.751358] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:26.499 [2024-11-18 23:10:45.751378] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:26.499 BaseBdev1 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.499 23:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.438 "name": "raid_bdev1", 00:15:27.438 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:27.438 "strip_size_kb": 64, 00:15:27.438 "state": "online", 00:15:27.438 "raid_level": "raid5f", 00:15:27.438 "superblock": true, 00:15:27.438 "num_base_bdevs": 4, 00:15:27.438 "num_base_bdevs_discovered": 3, 00:15:27.438 "num_base_bdevs_operational": 3, 00:15:27.438 "base_bdevs_list": [ 00:15:27.438 { 00:15:27.438 "name": null, 00:15:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.438 "is_configured": false, 00:15:27.438 
"data_offset": 0, 00:15:27.438 "data_size": 63488 00:15:27.438 }, 00:15:27.438 { 00:15:27.438 "name": "BaseBdev2", 00:15:27.438 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:27.438 "is_configured": true, 00:15:27.438 "data_offset": 2048, 00:15:27.438 "data_size": 63488 00:15:27.438 }, 00:15:27.438 { 00:15:27.438 "name": "BaseBdev3", 00:15:27.438 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:27.438 "is_configured": true, 00:15:27.438 "data_offset": 2048, 00:15:27.438 "data_size": 63488 00:15:27.438 }, 00:15:27.438 { 00:15:27.438 "name": "BaseBdev4", 00:15:27.438 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:27.438 "is_configured": true, 00:15:27.438 "data_offset": 2048, 00:15:27.438 "data_size": 63488 00:15:27.438 } 00:15:27.438 ] 00:15:27.438 }' 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.438 23:10:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.007 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.007 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.007 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.007 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.007 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.008 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.008 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.008 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.008 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:28.008 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.008 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.008 "name": "raid_bdev1", 00:15:28.008 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:28.008 "strip_size_kb": 64, 00:15:28.008 "state": "online", 00:15:28.008 "raid_level": "raid5f", 00:15:28.008 "superblock": true, 00:15:28.008 "num_base_bdevs": 4, 00:15:28.008 "num_base_bdevs_discovered": 3, 00:15:28.008 "num_base_bdevs_operational": 3, 00:15:28.008 "base_bdevs_list": [ 00:15:28.008 { 00:15:28.008 "name": null, 00:15:28.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.008 "is_configured": false, 00:15:28.008 "data_offset": 0, 00:15:28.008 "data_size": 63488 00:15:28.008 }, 00:15:28.008 { 00:15:28.008 "name": "BaseBdev2", 00:15:28.008 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:28.009 "is_configured": true, 00:15:28.009 "data_offset": 2048, 00:15:28.009 "data_size": 63488 00:15:28.009 }, 00:15:28.009 { 00:15:28.009 "name": "BaseBdev3", 00:15:28.009 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:28.009 "is_configured": true, 00:15:28.009 "data_offset": 2048, 00:15:28.009 "data_size": 63488 00:15:28.009 }, 00:15:28.009 { 00:15:28.009 "name": "BaseBdev4", 00:15:28.009 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:28.009 "is_configured": true, 00:15:28.009 "data_offset": 2048, 00:15:28.009 "data_size": 63488 00:15:28.009 } 00:15:28.009 ] 00:15:28.010 }' 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.010 
23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.010 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.010 [2024-11-18 23:10:47.328088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.010 [2024-11-18 23:10:47.328271] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:28.010 [2024-11-18 23:10:47.328343] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:28.010 request: 00:15:28.010 { 00:15:28.010 "base_bdev": "BaseBdev1", 00:15:28.011 "raid_bdev": "raid_bdev1", 00:15:28.011 "method": "bdev_raid_add_base_bdev", 00:15:28.011 "req_id": 1 00:15:28.011 } 00:15:28.011 Got JSON-RPC error response 00:15:28.011 response: 00:15:28.011 { 00:15:28.011 "code": -22, 00:15:28.011 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:28.011 } 00:15:28.011 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:28.011 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:28.011 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.011 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.011 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.011 23:10:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.391 "name": "raid_bdev1", 00:15:29.391 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:29.391 "strip_size_kb": 64, 00:15:29.391 "state": "online", 00:15:29.391 "raid_level": "raid5f", 00:15:29.391 "superblock": true, 00:15:29.391 "num_base_bdevs": 4, 00:15:29.391 "num_base_bdevs_discovered": 3, 00:15:29.391 "num_base_bdevs_operational": 3, 00:15:29.391 "base_bdevs_list": [ 00:15:29.391 { 00:15:29.391 "name": null, 00:15:29.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.391 "is_configured": false, 00:15:29.391 "data_offset": 0, 00:15:29.391 "data_size": 63488 00:15:29.391 }, 00:15:29.391 { 00:15:29.391 "name": "BaseBdev2", 00:15:29.391 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:29.391 "is_configured": true, 00:15:29.391 "data_offset": 2048, 00:15:29.391 "data_size": 63488 00:15:29.391 }, 00:15:29.391 { 00:15:29.391 "name": "BaseBdev3", 00:15:29.391 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:29.391 "is_configured": true, 00:15:29.391 "data_offset": 2048, 00:15:29.391 "data_size": 63488 00:15:29.391 }, 00:15:29.391 { 00:15:29.391 "name": "BaseBdev4", 00:15:29.391 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:29.391 "is_configured": true, 00:15:29.391 "data_offset": 2048, 00:15:29.391 "data_size": 63488 00:15:29.391 } 00:15:29.391 ] 00:15:29.391 }' 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.391 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.652 "name": "raid_bdev1", 00:15:29.652 "uuid": "75badf95-5fb7-410d-8e91-6a9bd0347c90", 00:15:29.652 "strip_size_kb": 64, 00:15:29.652 "state": "online", 00:15:29.652 "raid_level": "raid5f", 00:15:29.652 "superblock": true, 00:15:29.652 "num_base_bdevs": 4, 00:15:29.652 "num_base_bdevs_discovered": 3, 00:15:29.652 "num_base_bdevs_operational": 3, 00:15:29.652 "base_bdevs_list": [ 00:15:29.652 { 00:15:29.652 "name": null, 00:15:29.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.652 "is_configured": false, 00:15:29.652 "data_offset": 0, 00:15:29.652 "data_size": 63488 00:15:29.652 }, 00:15:29.652 { 00:15:29.652 "name": "BaseBdev2", 00:15:29.652 "uuid": "8ed9860d-d931-590a-b7e3-e1dd5c6a4a43", 00:15:29.652 "is_configured": true, 
00:15:29.652 "data_offset": 2048, 00:15:29.652 "data_size": 63488 00:15:29.652 }, 00:15:29.652 { 00:15:29.652 "name": "BaseBdev3", 00:15:29.652 "uuid": "6d50ed8c-eeee-5ef3-b700-2c83f8abf73a", 00:15:29.652 "is_configured": true, 00:15:29.652 "data_offset": 2048, 00:15:29.652 "data_size": 63488 00:15:29.652 }, 00:15:29.652 { 00:15:29.652 "name": "BaseBdev4", 00:15:29.652 "uuid": "3ea4edad-c81a-5a92-90f6-af16656a5c7c", 00:15:29.652 "is_configured": true, 00:15:29.652 "data_offset": 2048, 00:15:29.652 "data_size": 63488 00:15:29.652 } 00:15:29.652 ] 00:15:29.652 }' 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95470 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95470 ']' 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95470 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.652 23:10:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95470 00:15:29.652 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.652 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.652 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 95470' 00:15:29.652 killing process with pid 95470 00:15:29.652 Received shutdown signal, test time was about 60.000000 seconds 00:15:29.652 00:15:29.652 Latency(us) 00:15:29.652 [2024-11-18T23:10:49.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.652 [2024-11-18T23:10:49.030Z] =================================================================================================================== 00:15:29.652 [2024-11-18T23:10:49.030Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.652 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95470 00:15:29.652 [2024-11-18 23:10:49.008991] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.652 [2024-11-18 23:10:49.009099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.652 [2024-11-18 23:10:49.009174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.652 [2024-11-18 23:10:49.009183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:29.652 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95470 00:15:29.912 [2024-11-18 23:10:49.058921] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.912 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:29.912 00:15:29.912 real 0m25.229s 00:15:29.912 user 0m32.086s 00:15:29.912 sys 0m3.142s 00:15:29.912 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.912 ************************************ 00:15:29.912 END TEST raid5f_rebuild_test_sb 00:15:29.912 ************************************ 00:15:29.912 23:10:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 23:10:49 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:30.173 23:10:49 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:30.173 23:10:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:30.173 23:10:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.173 23:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 ************************************ 00:15:30.173 START TEST raid_state_function_test_sb_4k 00:15:30.173 ************************************ 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.173 23:10:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96262 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96262' 00:15:30.173 Process raid pid: 96262 00:15:30.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96262 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96262 ']' 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.173 23:10:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 [2024-11-18 23:10:49.475874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:30.173 [2024-11-18 23:10:49.476044] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.433 [2024-11-18 23:10:49.644912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.433 [2024-11-18 23:10:49.692626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.433 [2024-11-18 23:10:49.735687] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.433 [2024-11-18 23:10:49.735720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.002 [2024-11-18 23:10:50.301171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.002 [2024-11-18 23:10:50.301216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.002 [2024-11-18 23:10:50.301228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.002 [2024-11-18 23:10:50.301237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.002 "name": "Existed_Raid", 00:15:31.002 "uuid": 
"680f8d94-a64c-4980-9c09-d4f99f93f6e7", 00:15:31.002 "strip_size_kb": 0, 00:15:31.002 "state": "configuring", 00:15:31.002 "raid_level": "raid1", 00:15:31.002 "superblock": true, 00:15:31.002 "num_base_bdevs": 2, 00:15:31.002 "num_base_bdevs_discovered": 0, 00:15:31.002 "num_base_bdevs_operational": 2, 00:15:31.002 "base_bdevs_list": [ 00:15:31.002 { 00:15:31.002 "name": "BaseBdev1", 00:15:31.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.002 "is_configured": false, 00:15:31.002 "data_offset": 0, 00:15:31.002 "data_size": 0 00:15:31.002 }, 00:15:31.002 { 00:15:31.002 "name": "BaseBdev2", 00:15:31.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.002 "is_configured": false, 00:15:31.002 "data_offset": 0, 00:15:31.002 "data_size": 0 00:15:31.002 } 00:15:31.002 ] 00:15:31.002 }' 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.002 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 [2024-11-18 23:10:50.724392] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.575 [2024-11-18 23:10:50.724492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:31.575 23:10:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 [2024-11-18 23:10:50.736398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.575 [2024-11-18 23:10:50.736475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.575 [2024-11-18 23:10:50.736516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.575 [2024-11-18 23:10:50.736554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 [2024-11-18 23:10:50.757273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.575 BaseBdev1 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 [ 00:15:31.575 { 00:15:31.575 "name": "BaseBdev1", 00:15:31.575 "aliases": [ 00:15:31.575 "2fe5ef73-7e86-48c5-aad8-078c0a8f806e" 00:15:31.575 ], 00:15:31.575 "product_name": "Malloc disk", 00:15:31.575 "block_size": 4096, 00:15:31.575 "num_blocks": 8192, 00:15:31.575 "uuid": "2fe5ef73-7e86-48c5-aad8-078c0a8f806e", 00:15:31.575 "assigned_rate_limits": { 00:15:31.575 "rw_ios_per_sec": 0, 00:15:31.575 "rw_mbytes_per_sec": 0, 00:15:31.575 "r_mbytes_per_sec": 0, 00:15:31.575 "w_mbytes_per_sec": 0 00:15:31.575 }, 00:15:31.575 "claimed": true, 00:15:31.575 "claim_type": "exclusive_write", 00:15:31.575 "zoned": false, 00:15:31.575 "supported_io_types": { 00:15:31.575 "read": true, 00:15:31.575 "write": true, 00:15:31.575 "unmap": true, 00:15:31.575 "flush": true, 00:15:31.575 "reset": true, 00:15:31.575 "nvme_admin": false, 00:15:31.575 "nvme_io": false, 00:15:31.575 "nvme_io_md": false, 00:15:31.575 "write_zeroes": true, 00:15:31.575 "zcopy": true, 00:15:31.575 
"get_zone_info": false, 00:15:31.575 "zone_management": false, 00:15:31.575 "zone_append": false, 00:15:31.575 "compare": false, 00:15:31.575 "compare_and_write": false, 00:15:31.575 "abort": true, 00:15:31.575 "seek_hole": false, 00:15:31.575 "seek_data": false, 00:15:31.575 "copy": true, 00:15:31.575 "nvme_iov_md": false 00:15:31.575 }, 00:15:31.575 "memory_domains": [ 00:15:31.575 { 00:15:31.575 "dma_device_id": "system", 00:15:31.575 "dma_device_type": 1 00:15:31.575 }, 00:15:31.575 { 00:15:31.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.575 "dma_device_type": 2 00:15:31.575 } 00:15:31.575 ], 00:15:31.575 "driver_specific": {} 00:15:31.575 } 00:15:31.575 ] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.575 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.575 "name": "Existed_Raid", 00:15:31.576 "uuid": "cf93a417-1b56-4f27-bac6-306aea7b6dc0", 00:15:31.576 "strip_size_kb": 0, 00:15:31.576 "state": "configuring", 00:15:31.576 "raid_level": "raid1", 00:15:31.576 "superblock": true, 00:15:31.576 "num_base_bdevs": 2, 00:15:31.576 "num_base_bdevs_discovered": 1, 00:15:31.576 "num_base_bdevs_operational": 2, 00:15:31.576 "base_bdevs_list": [ 00:15:31.576 { 00:15:31.576 "name": "BaseBdev1", 00:15:31.576 "uuid": "2fe5ef73-7e86-48c5-aad8-078c0a8f806e", 00:15:31.576 "is_configured": true, 00:15:31.576 "data_offset": 256, 00:15:31.576 "data_size": 7936 00:15:31.576 }, 00:15:31.576 { 00:15:31.576 "name": "BaseBdev2", 00:15:31.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.576 "is_configured": false, 00:15:31.576 "data_offset": 0, 00:15:31.576 "data_size": 0 00:15:31.576 } 00:15:31.576 ] 00:15:31.576 }' 00:15:31.576 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.576 23:10:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.179 [2024-11-18 23:10:51.236463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.179 [2024-11-18 23:10:51.236505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.179 [2024-11-18 23:10:51.248480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.179 [2024-11-18 23:10:51.250278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.179 [2024-11-18 23:10:51.250343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:32.179 23:10:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.179 "name": "Existed_Raid", 00:15:32.179 "uuid": "5a4c33c2-9b81-4f21-9154-2cb88ede4901", 00:15:32.179 "strip_size_kb": 0, 00:15:32.179 "state": "configuring", 00:15:32.179 "raid_level": "raid1", 00:15:32.179 "superblock": true, 
00:15:32.179 "num_base_bdevs": 2, 00:15:32.179 "num_base_bdevs_discovered": 1, 00:15:32.179 "num_base_bdevs_operational": 2, 00:15:32.179 "base_bdevs_list": [ 00:15:32.179 { 00:15:32.179 "name": "BaseBdev1", 00:15:32.179 "uuid": "2fe5ef73-7e86-48c5-aad8-078c0a8f806e", 00:15:32.179 "is_configured": true, 00:15:32.179 "data_offset": 256, 00:15:32.179 "data_size": 7936 00:15:32.179 }, 00:15:32.179 { 00:15:32.179 "name": "BaseBdev2", 00:15:32.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.179 "is_configured": false, 00:15:32.179 "data_offset": 0, 00:15:32.179 "data_size": 0 00:15:32.179 } 00:15:32.179 ] 00:15:32.179 }' 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.179 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.439 [2024-11-18 23:10:51.753205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.439 [2024-11-18 23:10:51.753978] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:32.439 [2024-11-18 23:10:51.754151] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:32.439 BaseBdev2 00:15:32.439 [2024-11-18 23:10:51.755188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.439 [2024-11-18 23:10:51.755901] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:32.439 
[2024-11-18 23:10:51.756105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:32.439 [2024-11-18 23:10:51.756757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.439 [ 00:15:32.439 { 00:15:32.439 "name": "BaseBdev2", 00:15:32.439 "aliases": [ 00:15:32.439 "30c00a2e-d973-43e8-aba7-f437af9ddd4e" 00:15:32.439 ], 00:15:32.439 "product_name": "Malloc
disk", 00:15:32.439 "block_size": 4096, 00:15:32.439 "num_blocks": 8192, 00:15:32.439 "uuid": "30c00a2e-d973-43e8-aba7-f437af9ddd4e", 00:15:32.439 "assigned_rate_limits": { 00:15:32.439 "rw_ios_per_sec": 0, 00:15:32.439 "rw_mbytes_per_sec": 0, 00:15:32.439 "r_mbytes_per_sec": 0, 00:15:32.439 "w_mbytes_per_sec": 0 00:15:32.439 }, 00:15:32.439 "claimed": true, 00:15:32.439 "claim_type": "exclusive_write", 00:15:32.439 "zoned": false, 00:15:32.439 "supported_io_types": { 00:15:32.439 "read": true, 00:15:32.439 "write": true, 00:15:32.439 "unmap": true, 00:15:32.439 "flush": true, 00:15:32.439 "reset": true, 00:15:32.439 "nvme_admin": false, 00:15:32.439 "nvme_io": false, 00:15:32.439 "nvme_io_md": false, 00:15:32.439 "write_zeroes": true, 00:15:32.439 "zcopy": true, 00:15:32.439 "get_zone_info": false, 00:15:32.439 "zone_management": false, 00:15:32.439 "zone_append": false, 00:15:32.439 "compare": false, 00:15:32.439 "compare_and_write": false, 00:15:32.439 "abort": true, 00:15:32.439 "seek_hole": false, 00:15:32.439 "seek_data": false, 00:15:32.439 "copy": true, 00:15:32.439 "nvme_iov_md": false 00:15:32.439 }, 00:15:32.439 "memory_domains": [ 00:15:32.439 { 00:15:32.439 "dma_device_id": "system", 00:15:32.439 "dma_device_type": 1 00:15:32.439 }, 00:15:32.439 { 00:15:32.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.439 "dma_device_type": 2 00:15:32.439 } 00:15:32.439 ], 00:15:32.439 "driver_specific": {} 00:15:32.439 } 00:15:32.439 ] 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.439 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.440 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.699 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.699 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.699 "name": "Existed_Raid", 00:15:32.699 "uuid": "5a4c33c2-9b81-4f21-9154-2cb88ede4901", 00:15:32.699 "strip_size_kb": 0, 00:15:32.699 "state": "online", 
00:15:32.699 "raid_level": "raid1", 00:15:32.699 "superblock": true, 00:15:32.699 "num_base_bdevs": 2, 00:15:32.699 "num_base_bdevs_discovered": 2, 00:15:32.699 "num_base_bdevs_operational": 2, 00:15:32.699 "base_bdevs_list": [ 00:15:32.699 { 00:15:32.699 "name": "BaseBdev1", 00:15:32.699 "uuid": "2fe5ef73-7e86-48c5-aad8-078c0a8f806e", 00:15:32.699 "is_configured": true, 00:15:32.699 "data_offset": 256, 00:15:32.699 "data_size": 7936 00:15:32.699 }, 00:15:32.699 { 00:15:32.699 "name": "BaseBdev2", 00:15:32.699 "uuid": "30c00a2e-d973-43e8-aba7-f437af9ddd4e", 00:15:32.699 "is_configured": true, 00:15:32.699 "data_offset": 256, 00:15:32.699 "data_size": 7936 00:15:32.699 } 00:15:32.699 ] 00:15:32.699 }' 00:15:32.699 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.699 23:10:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.959 [2024-11-18 23:10:52.212633] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.959 "name": "Existed_Raid", 00:15:32.959 "aliases": [ 00:15:32.959 "5a4c33c2-9b81-4f21-9154-2cb88ede4901" 00:15:32.959 ], 00:15:32.959 "product_name": "Raid Volume", 00:15:32.959 "block_size": 4096, 00:15:32.959 "num_blocks": 7936, 00:15:32.959 "uuid": "5a4c33c2-9b81-4f21-9154-2cb88ede4901", 00:15:32.959 "assigned_rate_limits": { 00:15:32.959 "rw_ios_per_sec": 0, 00:15:32.959 "rw_mbytes_per_sec": 0, 00:15:32.959 "r_mbytes_per_sec": 0, 00:15:32.959 "w_mbytes_per_sec": 0 00:15:32.959 }, 00:15:32.959 "claimed": false, 00:15:32.959 "zoned": false, 00:15:32.959 "supported_io_types": { 00:15:32.959 "read": true, 00:15:32.959 "write": true, 00:15:32.959 "unmap": false, 00:15:32.959 "flush": false, 00:15:32.959 "reset": true, 00:15:32.959 "nvme_admin": false, 00:15:32.959 "nvme_io": false, 00:15:32.959 "nvme_io_md": false, 00:15:32.959 "write_zeroes": true, 00:15:32.959 "zcopy": false, 00:15:32.959 "get_zone_info": false, 00:15:32.959 "zone_management": false, 00:15:32.959 "zone_append": false, 00:15:32.959 "compare": false, 00:15:32.959 "compare_and_write": false, 00:15:32.959 "abort": false, 00:15:32.959 "seek_hole": false, 00:15:32.959 "seek_data": false, 00:15:32.959 "copy": false, 00:15:32.959 "nvme_iov_md": false 00:15:32.959 }, 00:15:32.959 "memory_domains": [ 00:15:32.959 { 00:15:32.959 "dma_device_id": "system", 00:15:32.959 "dma_device_type": 1 00:15:32.959 }, 00:15:32.959 { 00:15:32.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.959 "dma_device_type": 2 00:15:32.959 }, 00:15:32.959 { 00:15:32.959 
"dma_device_id": "system", 00:15:32.959 "dma_device_type": 1 00:15:32.959 }, 00:15:32.959 { 00:15:32.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.959 "dma_device_type": 2 00:15:32.959 } 00:15:32.959 ], 00:15:32.959 "driver_specific": { 00:15:32.959 "raid": { 00:15:32.959 "uuid": "5a4c33c2-9b81-4f21-9154-2cb88ede4901", 00:15:32.959 "strip_size_kb": 0, 00:15:32.959 "state": "online", 00:15:32.959 "raid_level": "raid1", 00:15:32.959 "superblock": true, 00:15:32.959 "num_base_bdevs": 2, 00:15:32.959 "num_base_bdevs_discovered": 2, 00:15:32.959 "num_base_bdevs_operational": 2, 00:15:32.959 "base_bdevs_list": [ 00:15:32.959 { 00:15:32.959 "name": "BaseBdev1", 00:15:32.959 "uuid": "2fe5ef73-7e86-48c5-aad8-078c0a8f806e", 00:15:32.959 "is_configured": true, 00:15:32.959 "data_offset": 256, 00:15:32.959 "data_size": 7936 00:15:32.959 }, 00:15:32.959 { 00:15:32.959 "name": "BaseBdev2", 00:15:32.959 "uuid": "30c00a2e-d973-43e8-aba7-f437af9ddd4e", 00:15:32.959 "is_configured": true, 00:15:32.959 "data_offset": 256, 00:15:32.959 "data_size": 7936 00:15:32.959 } 00:15:32.959 ] 00:15:32.959 } 00:15:32.959 } 00:15:32.959 }' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:32.959 BaseBdev2' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.959 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.219 
23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.219 [2024-11-18 23:10:52.436036] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.219 23:10:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.219 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.220 "name": "Existed_Raid", 00:15:33.220 "uuid": "5a4c33c2-9b81-4f21-9154-2cb88ede4901", 00:15:33.220 "strip_size_kb": 0, 00:15:33.220 "state": "online", 00:15:33.220 "raid_level": "raid1", 00:15:33.220 "superblock": true, 00:15:33.220 "num_base_bdevs": 2, 00:15:33.220 "num_base_bdevs_discovered": 1, 00:15:33.220 "num_base_bdevs_operational": 1, 00:15:33.220 "base_bdevs_list": [ 00:15:33.220 { 00:15:33.220 "name": null, 00:15:33.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.220 "is_configured": false, 00:15:33.220 "data_offset": 0, 00:15:33.220 "data_size": 7936 00:15:33.220 }, 00:15:33.220 { 00:15:33.220 "name": "BaseBdev2", 00:15:33.220 "uuid": "30c00a2e-d973-43e8-aba7-f437af9ddd4e", 00:15:33.220 "is_configured": true, 00:15:33.220 "data_offset": 256, 00:15:33.220 "data_size": 7936 00:15:33.220 } 00:15:33.220 ] 00:15:33.220 }' 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.220 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:33.790 23:10:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.790 [2024-11-18 23:10:52.934413] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.790 [2024-11-18 23:10:52.934568] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.790 [2024-11-18 23:10:52.945902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.790 [2024-11-18 23:10:52.946017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.790 [2024-11-18 23:10:52.946100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:33.790 23:10:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96262 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96262 ']' 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96262 00:15:33.790 23:10:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96262 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.790 killing process with pid 96262 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96262' 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96262 00:15:33.790 [2024-11-18 23:10:53.041761] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.790 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96262 00:15:33.790 [2024-11-18 23:10:53.042727] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.051 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:34.051 00:15:34.051 real 0m3.921s 00:15:34.051 user 0m6.013s 00:15:34.051 sys 0m0.944s 00:15:34.051 ************************************ 00:15:34.051 END TEST raid_state_function_test_sb_4k 00:15:34.051 ************************************ 00:15:34.051 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.051 23:10:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 23:10:53 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:34.051 23:10:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:34.051 23:10:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.051 23:10:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 ************************************ 00:15:34.051 START TEST raid_superblock_test_4k 00:15:34.051 ************************************ 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96503 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96503 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96503 ']' 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.051 23:10:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.311 [2024-11-18 23:10:53.464334] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:34.311 [2024-11-18 23:10:53.464487] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96503 ] 00:15:34.311 [2024-11-18 23:10:53.629827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.311 [2024-11-18 23:10:53.675898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.571 [2024-11-18 23:10:53.718160] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.571 [2024-11-18 23:10:53.718225] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:35.145 23:10:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 malloc1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 [2024-11-18 23:10:54.280370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.145 [2024-11-18 23:10:54.280492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.145 
[2024-11-18 23:10:54.280570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:35.145 [2024-11-18 23:10:54.280625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.145 [2024-11-18 23:10:54.282712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.145 [2024-11-18 23:10:54.282785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.145 pt1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 malloc2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 [2024-11-18 23:10:54.322467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.145 [2024-11-18 23:10:54.322567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.145 [2024-11-18 23:10:54.322602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:35.145 [2024-11-18 23:10:54.322626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.145 [2024-11-18 23:10:54.327188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.145 [2024-11-18 23:10:54.327263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.145 pt2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 [2024-11-18 23:10:54.335542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.145 [2024-11-18 23:10:54.338316] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.145 [2024-11-18 23:10:54.338512] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:35.145 [2024-11-18 23:10:54.338536] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:35.145 [2024-11-18 23:10:54.338904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:35.145 [2024-11-18 23:10:54.339113] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:35.145 [2024-11-18 23:10:54.339129] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:35.145 [2024-11-18 23:10:54.339348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.145 "name": "raid_bdev1", 00:15:35.145 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:35.145 "strip_size_kb": 0, 00:15:35.145 "state": "online", 00:15:35.145 "raid_level": "raid1", 00:15:35.145 "superblock": true, 00:15:35.145 "num_base_bdevs": 2, 00:15:35.145 "num_base_bdevs_discovered": 2, 00:15:35.145 "num_base_bdevs_operational": 2, 00:15:35.145 "base_bdevs_list": [ 00:15:35.145 { 00:15:35.145 "name": "pt1", 00:15:35.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.145 "is_configured": true, 00:15:35.145 "data_offset": 256, 00:15:35.145 "data_size": 7936 00:15:35.145 }, 00:15:35.145 { 00:15:35.145 "name": "pt2", 00:15:35.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.145 "is_configured": true, 00:15:35.145 "data_offset": 256, 00:15:35.145 "data_size": 7936 00:15:35.145 } 00:15:35.145 ] 00:15:35.145 }' 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.145 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.406 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:35.406 23:10:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:35.406 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:35.406 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:35.406 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:35.406 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.666 [2024-11-18 23:10:54.794902] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:35.666 "name": "raid_bdev1", 00:15:35.666 "aliases": [ 00:15:35.666 "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129" 00:15:35.666 ], 00:15:35.666 "product_name": "Raid Volume", 00:15:35.666 "block_size": 4096, 00:15:35.666 "num_blocks": 7936, 00:15:35.666 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:35.666 "assigned_rate_limits": { 00:15:35.666 "rw_ios_per_sec": 0, 00:15:35.666 "rw_mbytes_per_sec": 0, 00:15:35.666 "r_mbytes_per_sec": 0, 00:15:35.666 "w_mbytes_per_sec": 0 00:15:35.666 }, 00:15:35.666 "claimed": false, 00:15:35.666 "zoned": false, 00:15:35.666 "supported_io_types": { 00:15:35.666 "read": true, 00:15:35.666 "write": true, 00:15:35.666 "unmap": false, 00:15:35.666 "flush": false, 
00:15:35.666 "reset": true, 00:15:35.666 "nvme_admin": false, 00:15:35.666 "nvme_io": false, 00:15:35.666 "nvme_io_md": false, 00:15:35.666 "write_zeroes": true, 00:15:35.666 "zcopy": false, 00:15:35.666 "get_zone_info": false, 00:15:35.666 "zone_management": false, 00:15:35.666 "zone_append": false, 00:15:35.666 "compare": false, 00:15:35.666 "compare_and_write": false, 00:15:35.666 "abort": false, 00:15:35.666 "seek_hole": false, 00:15:35.666 "seek_data": false, 00:15:35.666 "copy": false, 00:15:35.666 "nvme_iov_md": false 00:15:35.666 }, 00:15:35.666 "memory_domains": [ 00:15:35.666 { 00:15:35.666 "dma_device_id": "system", 00:15:35.666 "dma_device_type": 1 00:15:35.666 }, 00:15:35.666 { 00:15:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.666 "dma_device_type": 2 00:15:35.666 }, 00:15:35.666 { 00:15:35.666 "dma_device_id": "system", 00:15:35.666 "dma_device_type": 1 00:15:35.666 }, 00:15:35.666 { 00:15:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.666 "dma_device_type": 2 00:15:35.666 } 00:15:35.666 ], 00:15:35.666 "driver_specific": { 00:15:35.666 "raid": { 00:15:35.666 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:35.666 "strip_size_kb": 0, 00:15:35.666 "state": "online", 00:15:35.666 "raid_level": "raid1", 00:15:35.666 "superblock": true, 00:15:35.666 "num_base_bdevs": 2, 00:15:35.666 "num_base_bdevs_discovered": 2, 00:15:35.666 "num_base_bdevs_operational": 2, 00:15:35.666 "base_bdevs_list": [ 00:15:35.666 { 00:15:35.666 "name": "pt1", 00:15:35.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.666 "is_configured": true, 00:15:35.666 "data_offset": 256, 00:15:35.666 "data_size": 7936 00:15:35.666 }, 00:15:35.666 { 00:15:35.666 "name": "pt2", 00:15:35.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.666 "is_configured": true, 00:15:35.666 "data_offset": 256, 00:15:35.666 "data_size": 7936 00:15:35.666 } 00:15:35.666 ] 00:15:35.666 } 00:15:35.666 } 00:15:35.666 }' 00:15:35.666 23:10:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:35.666 pt2' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:35.666 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:35.667 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.667 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:35.667 23:10:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.667 23:10:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.667 23:10:54 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.667 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 [2024-11-18 23:10:55.046414] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2e2d3f4-b4a1-4ded-8446-7c7ef8736129 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z a2e2d3f4-b4a1-4ded-8446-7c7ef8736129 ']' 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 [2024-11-18 23:10:55.090116] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.927 [2024-11-18 23:10:55.090144] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.927 [2024-11-18 23:10:55.090204] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.927 [2024-11-18 23:10:55.090264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.927 [2024-11-18 23:10:55.090273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.927 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 [2024-11-18 23:10:55.229903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.927 [2024-11-18 23:10:55.231675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.928 [2024-11-18 23:10:55.231740] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:35.928 [2024-11-18 23:10:55.231781] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:35.928 [2024-11-18 23:10:55.231796] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.928 [2024-11-18 23:10:55.231804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:35.928 request: 00:15:35.928 { 00:15:35.928 "name": "raid_bdev1", 00:15:35.928 "raid_level": "raid1", 00:15:35.928 "base_bdevs": [ 00:15:35.928 "malloc1", 00:15:35.928 "malloc2" 00:15:35.928 ], 00:15:35.928 "superblock": false, 00:15:35.928 "method": "bdev_raid_create", 00:15:35.928 "req_id": 1 00:15:35.928 } 00:15:35.928 Got JSON-RPC error response 00:15:35.928 response: 00:15:35.928 { 00:15:35.928 "code": -17, 00:15:35.928 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.928 } 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.928 [2024-11-18 23:10:55.293763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.928 [2024-11-18 23:10:55.293861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.928 [2024-11-18 23:10:55.293897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:35.928 [2024-11-18 23:10:55.293930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.928 [2024-11-18 23:10:55.295970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.928 [2024-11-18 23:10:55.296041] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.928 [2024-11-18 23:10:55.296131] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.928 [2024-11-18 23:10:55.296206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.928 pt1 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.928 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.187 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.188 "name": "raid_bdev1", 00:15:36.188 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:36.188 "strip_size_kb": 0, 00:15:36.188 "state": "configuring", 00:15:36.188 "raid_level": "raid1", 00:15:36.188 "superblock": true, 00:15:36.188 "num_base_bdevs": 2, 00:15:36.188 "num_base_bdevs_discovered": 1, 00:15:36.188 "num_base_bdevs_operational": 2, 00:15:36.188 "base_bdevs_list": [ 00:15:36.188 { 00:15:36.188 "name": "pt1", 00:15:36.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.188 "is_configured": true, 00:15:36.188 "data_offset": 256, 00:15:36.188 "data_size": 7936 00:15:36.188 }, 00:15:36.188 { 00:15:36.188 "name": null, 00:15:36.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.188 "is_configured": false, 00:15:36.188 "data_offset": 256, 00:15:36.188 "data_size": 7936 00:15:36.188 } 00:15:36.188 ] 00:15:36.188 }' 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.188 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.460 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:36.460 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:36.460 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.460 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.460 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.460 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:15:36.460 [2024-11-18 23:10:55.780915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.460 [2024-11-18 23:10:55.780964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.460 [2024-11-18 23:10:55.781000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:36.460 [2024-11-18 23:10:55.781007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.460 [2024-11-18 23:10:55.781403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.460 [2024-11-18 23:10:55.781454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.460 [2024-11-18 23:10:55.781545] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.460 [2024-11-18 23:10:55.781597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.460 [2024-11-18 23:10:55.781712] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:36.460 [2024-11-18 23:10:55.781752] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:36.460 [2024-11-18 23:10:55.782001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:36.461 [2024-11-18 23:10:55.782154] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:36.461 [2024-11-18 23:10:55.782205] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:36.461 [2024-11-18 23:10:55.782371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.461 pt2 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:36.461 23:10:55 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.461 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.462 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.462 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.726 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.726 "name": "raid_bdev1", 00:15:36.726 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:36.726 
"strip_size_kb": 0, 00:15:36.726 "state": "online", 00:15:36.726 "raid_level": "raid1", 00:15:36.726 "superblock": true, 00:15:36.726 "num_base_bdevs": 2, 00:15:36.726 "num_base_bdevs_discovered": 2, 00:15:36.726 "num_base_bdevs_operational": 2, 00:15:36.726 "base_bdevs_list": [ 00:15:36.726 { 00:15:36.726 "name": "pt1", 00:15:36.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.726 "is_configured": true, 00:15:36.726 "data_offset": 256, 00:15:36.726 "data_size": 7936 00:15:36.726 }, 00:15:36.726 { 00:15:36.726 "name": "pt2", 00:15:36.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.726 "is_configured": true, 00:15:36.726 "data_offset": 256, 00:15:36.726 "data_size": 7936 00:15:36.726 } 00:15:36.726 ] 00:15:36.726 }' 00:15:36.726 23:10:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.726 23:10:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.986 23:10:56 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.986 [2024-11-18 23:10:56.276275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.986 "name": "raid_bdev1", 00:15:36.986 "aliases": [ 00:15:36.986 "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129" 00:15:36.986 ], 00:15:36.986 "product_name": "Raid Volume", 00:15:36.986 "block_size": 4096, 00:15:36.986 "num_blocks": 7936, 00:15:36.986 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:36.986 "assigned_rate_limits": { 00:15:36.986 "rw_ios_per_sec": 0, 00:15:36.986 "rw_mbytes_per_sec": 0, 00:15:36.986 "r_mbytes_per_sec": 0, 00:15:36.986 "w_mbytes_per_sec": 0 00:15:36.986 }, 00:15:36.986 "claimed": false, 00:15:36.986 "zoned": false, 00:15:36.986 "supported_io_types": { 00:15:36.986 "read": true, 00:15:36.986 "write": true, 00:15:36.986 "unmap": false, 00:15:36.986 "flush": false, 00:15:36.986 "reset": true, 00:15:36.986 "nvme_admin": false, 00:15:36.986 "nvme_io": false, 00:15:36.986 "nvme_io_md": false, 00:15:36.986 "write_zeroes": true, 00:15:36.986 "zcopy": false, 00:15:36.986 "get_zone_info": false, 00:15:36.986 "zone_management": false, 00:15:36.986 "zone_append": false, 00:15:36.986 "compare": false, 00:15:36.986 "compare_and_write": false, 00:15:36.986 "abort": false, 00:15:36.986 "seek_hole": false, 00:15:36.986 "seek_data": false, 00:15:36.986 "copy": false, 00:15:36.986 "nvme_iov_md": false 00:15:36.986 }, 00:15:36.986 "memory_domains": [ 00:15:36.986 { 00:15:36.986 "dma_device_id": "system", 00:15:36.986 "dma_device_type": 1 00:15:36.986 }, 00:15:36.986 { 00:15:36.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.986 "dma_device_type": 2 00:15:36.986 }, 00:15:36.986 { 00:15:36.986 "dma_device_id": "system", 00:15:36.986 
"dma_device_type": 1 00:15:36.986 }, 00:15:36.986 { 00:15:36.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.986 "dma_device_type": 2 00:15:36.986 } 00:15:36.986 ], 00:15:36.986 "driver_specific": { 00:15:36.986 "raid": { 00:15:36.986 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:36.986 "strip_size_kb": 0, 00:15:36.986 "state": "online", 00:15:36.986 "raid_level": "raid1", 00:15:36.986 "superblock": true, 00:15:36.986 "num_base_bdevs": 2, 00:15:36.986 "num_base_bdevs_discovered": 2, 00:15:36.986 "num_base_bdevs_operational": 2, 00:15:36.986 "base_bdevs_list": [ 00:15:36.986 { 00:15:36.986 "name": "pt1", 00:15:36.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.986 "is_configured": true, 00:15:36.986 "data_offset": 256, 00:15:36.986 "data_size": 7936 00:15:36.986 }, 00:15:36.986 { 00:15:36.986 "name": "pt2", 00:15:36.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.986 "is_configured": true, 00:15:36.986 "data_offset": 256, 00:15:36.986 "data_size": 7936 00:15:36.986 } 00:15:36.986 ] 00:15:36.986 } 00:15:36.986 } 00:15:36.986 }' 00:15:36.986 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:37.246 pt2' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 [2024-11-18 23:10:56.507859] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' a2e2d3f4-b4a1-4ded-8446-7c7ef8736129 '!=' a2e2d3f4-b4a1-4ded-8446-7c7ef8736129 ']' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 [2024-11-18 23:10:56.551581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.246 "name": "raid_bdev1", 00:15:37.246 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:37.246 "strip_size_kb": 0, 00:15:37.246 "state": "online", 00:15:37.246 "raid_level": "raid1", 00:15:37.246 "superblock": true, 00:15:37.246 "num_base_bdevs": 2, 00:15:37.246 "num_base_bdevs_discovered": 1, 00:15:37.246 "num_base_bdevs_operational": 1, 00:15:37.246 "base_bdevs_list": [ 00:15:37.246 { 00:15:37.246 "name": null, 00:15:37.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.246 "is_configured": false, 00:15:37.246 "data_offset": 0, 00:15:37.246 "data_size": 7936 00:15:37.246 }, 00:15:37.246 { 00:15:37.246 "name": "pt2", 00:15:37.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.246 "is_configured": true, 00:15:37.246 "data_offset": 256, 00:15:37.246 "data_size": 7936 00:15:37.246 } 00:15:37.246 ] 00:15:37.246 }' 00:15:37.246 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.246 23:10:56 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.816 [2024-11-18 23:10:56.970993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.816 [2024-11-18 23:10:56.971018] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.816 [2024-11-18 23:10:56.971076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.816 [2024-11-18 23:10:56.971112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.816 [2024-11-18 23:10:56.971120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.816 23:10:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.816 [2024-11-18 23:10:57.046866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.816 [2024-11-18 23:10:57.046909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.816 [2024-11-18 23:10:57.046941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:37.816 [2024-11-18 23:10:57.046949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.816 [2024-11-18 23:10:57.048989] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.816 [2024-11-18 23:10:57.049025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.816 [2024-11-18 23:10:57.049088] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:37.816 [2024-11-18 23:10:57.049114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.816 [2024-11-18 23:10:57.049177] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:37.816 [2024-11-18 23:10:57.049185] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:37.816 [2024-11-18 23:10:57.049405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:37.816 [2024-11-18 23:10:57.049526] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:37.816 [2024-11-18 23:10:57.049546] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:37.816 [2024-11-18 23:10:57.049645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.816 pt2 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.816 "name": "raid_bdev1", 00:15:37.816 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:37.816 "strip_size_kb": 0, 00:15:37.816 "state": "online", 00:15:37.816 "raid_level": "raid1", 00:15:37.816 "superblock": true, 00:15:37.816 "num_base_bdevs": 2, 00:15:37.816 "num_base_bdevs_discovered": 1, 00:15:37.816 "num_base_bdevs_operational": 1, 00:15:37.816 "base_bdevs_list": [ 00:15:37.816 { 00:15:37.816 "name": null, 00:15:37.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.816 "is_configured": false, 00:15:37.816 "data_offset": 256, 00:15:37.816 "data_size": 7936 00:15:37.816 }, 00:15:37.816 { 00:15:37.816 "name": "pt2", 00:15:37.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.816 "is_configured": true, 00:15:37.816 "data_offset": 256, 00:15:37.816 "data_size": 7936 00:15:37.816 } 00:15:37.816 ] 00:15:37.816 }' 
00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.816 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 [2024-11-18 23:10:57.558005] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.395 [2024-11-18 23:10:57.558025] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.395 [2024-11-18 23:10:57.558068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.395 [2024-11-18 23:10:57.558100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.395 [2024-11-18 23:10:57.558109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 [2024-11-18 23:10:57.617884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.395 [2024-11-18 23:10:57.617929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.395 [2024-11-18 23:10:57.617964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:38.395 [2024-11-18 23:10:57.617978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.395 [2024-11-18 23:10:57.619890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.395 [2024-11-18 23:10:57.619926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.395 [2024-11-18 23:10:57.619979] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:38.395 [2024-11-18 23:10:57.620013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.395 [2024-11-18 23:10:57.620112] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:38.395 [2024-11-18 23:10:57.620124] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.395 [2024-11-18 23:10:57.620138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:38.395 [2024-11-18 23:10:57.620185] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.395 [2024-11-18 23:10:57.620240] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:38.395 [2024-11-18 23:10:57.620249] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:38.395 [2024-11-18 23:10:57.620536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:38.395 [2024-11-18 23:10:57.620687] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:38.395 [2024-11-18 23:10:57.620732] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:38.395 [2024-11-18 23:10:57.620895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.395 pt1 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.395 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.395 "name": "raid_bdev1", 00:15:38.395 "uuid": "a2e2d3f4-b4a1-4ded-8446-7c7ef8736129", 00:15:38.395 "strip_size_kb": 0, 00:15:38.395 "state": "online", 00:15:38.395 "raid_level": "raid1", 00:15:38.395 "superblock": true, 00:15:38.395 "num_base_bdevs": 2, 00:15:38.395 "num_base_bdevs_discovered": 1, 00:15:38.395 "num_base_bdevs_operational": 1, 00:15:38.395 "base_bdevs_list": [ 00:15:38.395 { 00:15:38.395 "name": null, 00:15:38.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.395 "is_configured": false, 00:15:38.395 "data_offset": 256, 00:15:38.395 "data_size": 7936 00:15:38.395 }, 00:15:38.395 { 00:15:38.395 "name": "pt2", 00:15:38.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.396 "is_configured": true, 00:15:38.396 "data_offset": 256, 00:15:38.396 "data_size": 7936 00:15:38.396 } 00:15:38.396 ] 00:15:38.396 }' 00:15:38.396 23:10:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.396 23:10:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.967 23:10:58 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:38.967 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:38.967 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.967 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.968 [2024-11-18 23:10:58.141195] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' a2e2d3f4-b4a1-4ded-8446-7c7ef8736129 '!=' a2e2d3f4-b4a1-4ded-8446-7c7ef8736129 ']' 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96503 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96503 ']' 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96503 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96503 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96503' 00:15:38.968 killing process with pid 96503 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96503 00:15:38.968 [2024-11-18 23:10:58.206979] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.968 [2024-11-18 23:10:58.207035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.968 [2024-11-18 23:10:58.207073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.968 [2024-11-18 23:10:58.207080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:38.968 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96503 00:15:38.968 [2024-11-18 23:10:58.229220] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.229 23:10:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:39.229 00:15:39.229 real 0m5.106s 00:15:39.229 user 0m8.306s 00:15:39.229 sys 0m1.157s 00:15:39.229 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.229 ************************************ 00:15:39.229 END TEST raid_superblock_test_4k 00:15:39.229 ************************************ 00:15:39.229 23:10:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.229 23:10:58 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:15:39.229 23:10:58 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:39.229 23:10:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:39.229 23:10:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.229 23:10:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.229 ************************************ 00:15:39.229 START TEST raid_rebuild_test_sb_4k 00:15:39.229 ************************************ 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96816 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96816 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96816 ']' 00:15:39.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.229 23:10:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.488 [2024-11-18 23:10:58.667947] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:39.488 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:39.488 Zero copy mechanism will not be used. 00:15:39.488 [2024-11-18 23:10:58.668163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96816 ] 00:15:39.488 [2024-11-18 23:10:58.834241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.748 [2024-11-18 23:10:58.882113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.748 [2024-11-18 23:10:58.923625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.748 [2024-11-18 23:10:58.923660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.318 23:10:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.318 BaseBdev1_malloc 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.318 [2024-11-18 23:10:59.505342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:40.318 [2024-11-18 23:10:59.505397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.318 [2024-11-18 23:10:59.505439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:40.318 [2024-11-18 23:10:59.505452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.318 [2024-11-18 23:10:59.507524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.318 [2024-11-18 23:10:59.507618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.318 BaseBdev1 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:40.318 23:10:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.318 BaseBdev2_malloc 00:15:40.318 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 [2024-11-18 23:10:59.549859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:40.319 [2024-11-18 23:10:59.549963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.319 [2024-11-18 23:10:59.550008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:40.319 [2024-11-18 23:10:59.550029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.319 [2024-11-18 23:10:59.554733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.319 [2024-11-18 23:10:59.554806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.319 BaseBdev2 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 spare_malloc 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 spare_delay 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 [2024-11-18 23:10:59.592857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.319 [2024-11-18 23:10:59.592906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.319 [2024-11-18 23:10:59.592926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:40.319 [2024-11-18 23:10:59.592935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.319 [2024-11-18 23:10:59.594975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.319 [2024-11-18 23:10:59.595059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.319 spare 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 [2024-11-18 23:10:59.604877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.319 [2024-11-18 23:10:59.606668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.319 [2024-11-18 23:10:59.606812] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:40.319 [2024-11-18 23:10:59.606830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:40.319 [2024-11-18 23:10:59.607079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:40.319 [2024-11-18 23:10:59.607193] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:40.319 [2024-11-18 23:10:59.607205] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:40.319 [2024-11-18 23:10:59.607325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.319 
23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.319 "name": "raid_bdev1", 00:15:40.319 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:40.319 "strip_size_kb": 0, 00:15:40.319 "state": "online", 00:15:40.319 "raid_level": "raid1", 00:15:40.319 "superblock": true, 00:15:40.319 "num_base_bdevs": 2, 00:15:40.319 "num_base_bdevs_discovered": 2, 00:15:40.319 "num_base_bdevs_operational": 2, 00:15:40.319 "base_bdevs_list": [ 00:15:40.319 { 00:15:40.319 "name": "BaseBdev1", 00:15:40.319 "uuid": "f3a4effa-170c-5529-96b8-99d6715cd90d", 00:15:40.319 "is_configured": true, 00:15:40.319 "data_offset": 256, 00:15:40.319 "data_size": 7936 00:15:40.319 }, 00:15:40.319 { 00:15:40.319 "name": "BaseBdev2", 00:15:40.319 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:40.319 "is_configured": true, 00:15:40.319 "data_offset": 256, 00:15:40.319 "data_size": 7936 00:15:40.319 } 00:15:40.319 ] 00:15:40.319 }' 00:15:40.319 23:10:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.319 23:10:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.887 [2024-11-18 23:11:00.064322] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.887 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:41.146 [2024-11-18 23:11:00.331610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:41.146 /dev/nbd0 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.146 1+0 records in 00:15:41.146 1+0 records out 00:15:41.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382259 s, 10.7 MB/s 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.146 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:41.147 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:41.147 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.147 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.147 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:41.147 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:41.147 23:11:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:41.713 7936+0 records in 00:15:41.713 7936+0 records out 00:15:41.713 32505856 bytes (33 MB, 31 MiB) copied, 0.607233 s, 53.5 MB/s 00:15:41.713 23:11:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:41.713 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.713 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.713 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.713 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:41.713 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.713 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.973 [2024-11-18 23:11:01.240252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.973 [2024-11-18 23:11:01.272271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.973 "name": "raid_bdev1", 00:15:41.973 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:41.973 "strip_size_kb": 0, 00:15:41.973 "state": "online", 00:15:41.973 "raid_level": "raid1", 00:15:41.973 "superblock": true, 00:15:41.973 "num_base_bdevs": 2, 00:15:41.973 "num_base_bdevs_discovered": 1, 00:15:41.973 "num_base_bdevs_operational": 1, 00:15:41.973 "base_bdevs_list": [ 00:15:41.973 { 00:15:41.973 "name": null, 00:15:41.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.973 "is_configured": false, 00:15:41.973 "data_offset": 0, 00:15:41.973 "data_size": 7936 00:15:41.973 }, 00:15:41.973 { 00:15:41.973 "name": "BaseBdev2", 00:15:41.973 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:41.973 "is_configured": true, 00:15:41.973 "data_offset": 256, 00:15:41.973 "data_size": 7936 00:15:41.973 } 00:15:41.973 ] 00:15:41.973 }' 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.973 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.541 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.541 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.541 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.541 [2024-11-18 23:11:01.747485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.541 [2024-11-18 23:11:01.751627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:42.541 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.541 23:11:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:42.541 [2024-11-18 
23:11:01.753538] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.479 "name": "raid_bdev1", 00:15:43.479 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:43.479 "strip_size_kb": 0, 00:15:43.479 "state": "online", 00:15:43.479 "raid_level": "raid1", 00:15:43.479 "superblock": true, 00:15:43.479 "num_base_bdevs": 2, 00:15:43.479 "num_base_bdevs_discovered": 2, 00:15:43.479 "num_base_bdevs_operational": 2, 00:15:43.479 "process": { 00:15:43.479 "type": "rebuild", 00:15:43.479 "target": "spare", 00:15:43.479 "progress": { 00:15:43.479 "blocks": 2560, 00:15:43.479 "percent": 32 00:15:43.479 } 00:15:43.479 }, 00:15:43.479 "base_bdevs_list": [ 00:15:43.479 { 00:15:43.479 "name": "spare", 
00:15:43.479 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:43.479 "is_configured": true, 00:15:43.479 "data_offset": 256, 00:15:43.479 "data_size": 7936 00:15:43.479 }, 00:15:43.479 { 00:15:43.479 "name": "BaseBdev2", 00:15:43.479 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:43.479 "is_configured": true, 00:15:43.479 "data_offset": 256, 00:15:43.479 "data_size": 7936 00:15:43.479 } 00:15:43.479 ] 00:15:43.479 }' 00:15:43.479 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.739 [2024-11-18 23:11:02.910618] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.739 [2024-11-18 23:11:02.957946] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.739 [2024-11-18 23:11:02.957997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.739 [2024-11-18 23:11:02.958015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.739 [2024-11-18 23:11:02.958023] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.739 23:11:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.739 23:11:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.739 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.739 "name": "raid_bdev1", 00:15:43.739 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:43.739 "strip_size_kb": 0, 00:15:43.739 "state": "online", 00:15:43.739 "raid_level": "raid1", 00:15:43.739 
"superblock": true, 00:15:43.739 "num_base_bdevs": 2, 00:15:43.739 "num_base_bdevs_discovered": 1, 00:15:43.739 "num_base_bdevs_operational": 1, 00:15:43.739 "base_bdevs_list": [ 00:15:43.739 { 00:15:43.739 "name": null, 00:15:43.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.739 "is_configured": false, 00:15:43.739 "data_offset": 0, 00:15:43.739 "data_size": 7936 00:15:43.739 }, 00:15:43.739 { 00:15:43.739 "name": "BaseBdev2", 00:15:43.739 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:43.739 "is_configured": true, 00:15:43.739 "data_offset": 256, 00:15:43.739 "data_size": 7936 00:15:43.739 } 00:15:43.739 ] 00:15:43.739 }' 00:15:43.739 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.739 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:44.308 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.308 "name": "raid_bdev1", 00:15:44.308 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:44.308 "strip_size_kb": 0, 00:15:44.308 "state": "online", 00:15:44.308 "raid_level": "raid1", 00:15:44.308 "superblock": true, 00:15:44.308 "num_base_bdevs": 2, 00:15:44.308 "num_base_bdevs_discovered": 1, 00:15:44.308 "num_base_bdevs_operational": 1, 00:15:44.308 "base_bdevs_list": [ 00:15:44.308 { 00:15:44.308 "name": null, 00:15:44.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.308 "is_configured": false, 00:15:44.308 "data_offset": 0, 00:15:44.308 "data_size": 7936 00:15:44.308 }, 00:15:44.308 { 00:15:44.308 "name": "BaseBdev2", 00:15:44.308 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:44.308 "is_configured": true, 00:15:44.308 "data_offset": 256, 00:15:44.308 "data_size": 7936 00:15:44.308 } 00:15:44.308 ] 00:15:44.308 }' 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.309 [2024-11-18 23:11:03.549270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.309 [2024-11-18 23:11:03.553171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00018d190 00:15:44.309 [2024-11-18 23:11:03.555019] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.309 23:11:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.247 "name": "raid_bdev1", 00:15:45.247 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:45.247 "strip_size_kb": 0, 00:15:45.247 "state": "online", 00:15:45.247 "raid_level": "raid1", 00:15:45.247 "superblock": true, 00:15:45.247 "num_base_bdevs": 2, 00:15:45.247 "num_base_bdevs_discovered": 2, 00:15:45.247 "num_base_bdevs_operational": 2, 00:15:45.247 "process": { 00:15:45.247 
"type": "rebuild", 00:15:45.247 "target": "spare", 00:15:45.247 "progress": { 00:15:45.247 "blocks": 2560, 00:15:45.247 "percent": 32 00:15:45.247 } 00:15:45.247 }, 00:15:45.247 "base_bdevs_list": [ 00:15:45.247 { 00:15:45.247 "name": "spare", 00:15:45.247 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:45.247 "is_configured": true, 00:15:45.247 "data_offset": 256, 00:15:45.247 "data_size": 7936 00:15:45.247 }, 00:15:45.247 { 00:15:45.247 "name": "BaseBdev2", 00:15:45.247 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:45.247 "is_configured": true, 00:15:45.247 "data_offset": 256, 00:15:45.247 "data_size": 7936 00:15:45.247 } 00:15:45.247 ] 00:15:45.247 }' 00:15:45.247 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:45.506 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.506 "name": "raid_bdev1", 00:15:45.506 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:45.506 "strip_size_kb": 0, 00:15:45.506 "state": "online", 00:15:45.506 "raid_level": "raid1", 00:15:45.506 "superblock": true, 00:15:45.506 "num_base_bdevs": 2, 00:15:45.506 "num_base_bdevs_discovered": 2, 00:15:45.506 "num_base_bdevs_operational": 2, 00:15:45.506 "process": { 00:15:45.506 "type": "rebuild", 00:15:45.506 "target": "spare", 00:15:45.506 "progress": { 00:15:45.506 "blocks": 2816, 00:15:45.506 "percent": 35 00:15:45.506 } 00:15:45.506 }, 00:15:45.506 "base_bdevs_list": [ 00:15:45.506 { 00:15:45.506 "name": "spare", 00:15:45.506 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:45.506 "is_configured": true, 
00:15:45.506 "data_offset": 256, 00:15:45.506 "data_size": 7936 00:15:45.506 }, 00:15:45.506 { 00:15:45.506 "name": "BaseBdev2", 00:15:45.506 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:45.506 "is_configured": true, 00:15:45.506 "data_offset": 256, 00:15:45.506 "data_size": 7936 00:15:45.506 } 00:15:45.506 ] 00:15:45.506 }' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.506 23:11:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.884 "name": "raid_bdev1", 00:15:46.884 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:46.884 "strip_size_kb": 0, 00:15:46.884 "state": "online", 00:15:46.884 "raid_level": "raid1", 00:15:46.884 "superblock": true, 00:15:46.884 "num_base_bdevs": 2, 00:15:46.884 "num_base_bdevs_discovered": 2, 00:15:46.884 "num_base_bdevs_operational": 2, 00:15:46.884 "process": { 00:15:46.884 "type": "rebuild", 00:15:46.884 "target": "spare", 00:15:46.884 "progress": { 00:15:46.884 "blocks": 5888, 00:15:46.884 "percent": 74 00:15:46.884 } 00:15:46.884 }, 00:15:46.884 "base_bdevs_list": [ 00:15:46.884 { 00:15:46.884 "name": "spare", 00:15:46.884 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:46.884 "is_configured": true, 00:15:46.884 "data_offset": 256, 00:15:46.884 "data_size": 7936 00:15:46.884 }, 00:15:46.884 { 00:15:46.884 "name": "BaseBdev2", 00:15:46.884 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:46.884 "is_configured": true, 00:15:46.884 "data_offset": 256, 00:15:46.884 "data_size": 7936 00:15:46.884 } 00:15:46.884 ] 00:15:46.884 }' 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.884 23:11:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.884 23:11:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.884 23:11:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.451 [2024-11-18 23:11:06.664919] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:15:47.451 [2024-11-18 23:11:06.664991] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:47.451 [2024-11-18 23:11:06.665093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.709 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.970 "name": "raid_bdev1", 00:15:47.970 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:47.970 "strip_size_kb": 0, 00:15:47.970 "state": "online", 00:15:47.970 "raid_level": "raid1", 00:15:47.970 "superblock": true, 00:15:47.970 "num_base_bdevs": 2, 00:15:47.970 "num_base_bdevs_discovered": 2, 00:15:47.970 "num_base_bdevs_operational": 2, 
00:15:47.970 "base_bdevs_list": [ 00:15:47.970 { 00:15:47.970 "name": "spare", 00:15:47.970 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:47.970 "is_configured": true, 00:15:47.970 "data_offset": 256, 00:15:47.970 "data_size": 7936 00:15:47.970 }, 00:15:47.970 { 00:15:47.970 "name": "BaseBdev2", 00:15:47.970 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:47.970 "is_configured": true, 00:15:47.970 "data_offset": 256, 00:15:47.970 "data_size": 7936 00:15:47.970 } 00:15:47.970 ] 00:15:47.970 }' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.970 "name": "raid_bdev1", 00:15:47.970 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:47.970 "strip_size_kb": 0, 00:15:47.970 "state": "online", 00:15:47.970 "raid_level": "raid1", 00:15:47.970 "superblock": true, 00:15:47.970 "num_base_bdevs": 2, 00:15:47.970 "num_base_bdevs_discovered": 2, 00:15:47.970 "num_base_bdevs_operational": 2, 00:15:47.970 "base_bdevs_list": [ 00:15:47.970 { 00:15:47.970 "name": "spare", 00:15:47.970 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:47.970 "is_configured": true, 00:15:47.970 "data_offset": 256, 00:15:47.970 "data_size": 7936 00:15:47.970 }, 00:15:47.970 { 00:15:47.970 "name": "BaseBdev2", 00:15:47.970 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:47.970 "is_configured": true, 00:15:47.970 "data_offset": 256, 00:15:47.970 "data_size": 7936 00:15:47.970 } 00:15:47.970 ] 00:15:47.970 }' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.970 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.229 "name": "raid_bdev1", 00:15:48.229 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:48.229 "strip_size_kb": 0, 00:15:48.229 "state": "online", 00:15:48.229 "raid_level": "raid1", 00:15:48.229 "superblock": true, 00:15:48.229 "num_base_bdevs": 2, 00:15:48.229 "num_base_bdevs_discovered": 2, 00:15:48.229 "num_base_bdevs_operational": 2, 00:15:48.229 "base_bdevs_list": [ 00:15:48.229 { 00:15:48.229 "name": "spare", 00:15:48.229 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:48.229 "is_configured": true, 00:15:48.229 
"data_offset": 256, 00:15:48.229 "data_size": 7936 00:15:48.229 }, 00:15:48.229 { 00:15:48.229 "name": "BaseBdev2", 00:15:48.229 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:48.229 "is_configured": true, 00:15:48.229 "data_offset": 256, 00:15:48.229 "data_size": 7936 00:15:48.229 } 00:15:48.229 ] 00:15:48.229 }' 00:15:48.229 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.229 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.489 [2024-11-18 23:11:07.779047] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.489 [2024-11-18 23:11:07.779073] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.489 [2024-11-18 23:11:07.779144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.489 [2024-11-18 23:11:07.779209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.489 [2024-11-18 23:11:07.779225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.489 23:11:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.489 23:11:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:48.769 /dev/nbd0 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.769 1+0 records in 00:15:48.769 1+0 records out 00:15:48.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032235 s, 12.7 MB/s 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.769 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.769 23:11:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:49.059 /dev/nbd1 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.059 1+0 records in 00:15:49.059 1+0 records out 00:15:49.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356961 s, 11.5 MB/s 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.059 
23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.059 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:49.060 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.060 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:49.060 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.060 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.319 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.579 [2024-11-18 23:11:08.831494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.579 [2024-11-18 23:11:08.831546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.579 [2024-11-18 23:11:08.831566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:49.579 [2024-11-18 23:11:08.831578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.579 [2024-11-18 23:11:08.833713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.579 [2024-11-18 23:11:08.833810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.579 [2024-11-18 23:11:08.833891] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:49.579 [2024-11-18 23:11:08.833940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.579 [2024-11-18 23:11:08.834049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.579 spare 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.579 [2024-11-18 23:11:08.933940] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:49.579 [2024-11-18 23:11:08.933971] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4096 00:15:49.579 [2024-11-18 23:11:08.934226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:49.579 [2024-11-18 23:11:08.934387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:49.579 [2024-11-18 23:11:08.934402] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:49.579 [2024-11-18 23:11:08.934551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.579 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.838 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.838 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.838 "name": "raid_bdev1", 00:15:49.838 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:49.838 "strip_size_kb": 0, 00:15:49.838 "state": "online", 00:15:49.838 "raid_level": "raid1", 00:15:49.838 "superblock": true, 00:15:49.838 "num_base_bdevs": 2, 00:15:49.838 "num_base_bdevs_discovered": 2, 00:15:49.838 "num_base_bdevs_operational": 2, 00:15:49.838 "base_bdevs_list": [ 00:15:49.838 { 00:15:49.838 "name": "spare", 00:15:49.838 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:49.838 "is_configured": true, 00:15:49.838 "data_offset": 256, 00:15:49.838 "data_size": 7936 00:15:49.838 }, 00:15:49.838 { 00:15:49.838 "name": "BaseBdev2", 00:15:49.838 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:49.838 "is_configured": true, 00:15:49.838 "data_offset": 256, 00:15:49.838 "data_size": 7936 00:15:49.838 } 00:15:49.838 ] 00:15:49.838 }' 00:15:49.838 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.838 23:11:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 
00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.113 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.113 "name": "raid_bdev1", 00:15:50.113 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:50.113 "strip_size_kb": 0, 00:15:50.113 "state": "online", 00:15:50.113 "raid_level": "raid1", 00:15:50.113 "superblock": true, 00:15:50.113 "num_base_bdevs": 2, 00:15:50.113 "num_base_bdevs_discovered": 2, 00:15:50.114 "num_base_bdevs_operational": 2, 00:15:50.114 "base_bdevs_list": [ 00:15:50.114 { 00:15:50.114 "name": "spare", 00:15:50.114 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:50.114 "is_configured": true, 00:15:50.114 "data_offset": 256, 00:15:50.114 "data_size": 7936 00:15:50.114 }, 00:15:50.114 { 00:15:50.114 "name": "BaseBdev2", 00:15:50.114 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:50.114 "is_configured": true, 00:15:50.114 "data_offset": 256, 00:15:50.114 "data_size": 7936 00:15:50.114 } 00:15:50.114 ] 00:15:50.114 }' 00:15:50.114 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.378 [2024-11-18 23:11:09.622138] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.378 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.378 "name": "raid_bdev1", 00:15:50.378 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:50.378 "strip_size_kb": 0, 00:15:50.378 "state": "online", 00:15:50.378 "raid_level": "raid1", 00:15:50.378 "superblock": true, 00:15:50.378 "num_base_bdevs": 2, 00:15:50.378 "num_base_bdevs_discovered": 1, 00:15:50.378 "num_base_bdevs_operational": 1, 00:15:50.378 "base_bdevs_list": [ 00:15:50.378 { 00:15:50.378 "name": null, 00:15:50.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.378 "is_configured": false, 00:15:50.378 "data_offset": 0, 00:15:50.378 "data_size": 7936 00:15:50.378 }, 00:15:50.378 { 00:15:50.378 "name": "BaseBdev2", 00:15:50.378 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:50.378 "is_configured": true, 00:15:50.379 "data_offset": 256, 00:15:50.379 "data_size": 7936 00:15:50.379 } 00:15:50.379 ] 00:15:50.379 }' 
00:15:50.379 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.379 23:11:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.947 23:11:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:50.947 23:11:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.947 23:11:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.947 [2024-11-18 23:11:10.069402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.947 [2024-11-18 23:11:10.069581] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.947 [2024-11-18 23:11:10.069645] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:50.947 [2024-11-18 23:11:10.069734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.947 [2024-11-18 23:11:10.073635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:50.947 23:11:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.947 23:11:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:50.947 [2024-11-18 23:11:10.075546] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.885 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.886 "name": "raid_bdev1", 00:15:51.886 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:51.886 "strip_size_kb": 0, 00:15:51.886 "state": "online", 00:15:51.886 "raid_level": "raid1", 00:15:51.886 "superblock": true, 00:15:51.886 "num_base_bdevs": 2, 00:15:51.886 "num_base_bdevs_discovered": 2, 00:15:51.886 "num_base_bdevs_operational": 2, 00:15:51.886 "process": { 00:15:51.886 "type": "rebuild", 00:15:51.886 "target": "spare", 00:15:51.886 "progress": { 00:15:51.886 "blocks": 2560, 00:15:51.886 "percent": 32 00:15:51.886 } 00:15:51.886 }, 00:15:51.886 "base_bdevs_list": [ 00:15:51.886 { 00:15:51.886 "name": "spare", 00:15:51.886 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:51.886 "is_configured": true, 00:15:51.886 "data_offset": 256, 00:15:51.886 "data_size": 7936 00:15:51.886 }, 00:15:51.886 { 00:15:51.886 "name": "BaseBdev2", 00:15:51.886 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:51.886 "is_configured": true, 00:15:51.886 "data_offset": 256, 00:15:51.886 "data_size": 7936 00:15:51.886 } 00:15:51.886 ] 00:15:51.886 }' 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.886 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.886 [2024-11-18 23:11:11.240347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.146 [2024-11-18 23:11:11.279395] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.146 [2024-11-18 23:11:11.279526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.146 [2024-11-18 23:11:11.279546] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.146 [2024-11-18 23:11:11.279555] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.146 "name": "raid_bdev1", 00:15:52.146 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:52.146 "strip_size_kb": 0, 00:15:52.146 "state": "online", 00:15:52.146 "raid_level": "raid1", 00:15:52.146 "superblock": true, 00:15:52.146 "num_base_bdevs": 2, 00:15:52.146 "num_base_bdevs_discovered": 1, 00:15:52.146 "num_base_bdevs_operational": 1, 00:15:52.146 "base_bdevs_list": [ 00:15:52.146 { 00:15:52.146 "name": null, 00:15:52.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.146 "is_configured": false, 00:15:52.146 "data_offset": 0, 00:15:52.146 "data_size": 7936 00:15:52.146 }, 00:15:52.146 { 00:15:52.146 "name": "BaseBdev2", 00:15:52.146 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:52.146 "is_configured": true, 00:15:52.146 
"data_offset": 256, 00:15:52.146 "data_size": 7936 00:15:52.146 } 00:15:52.146 ] 00:15:52.146 }' 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.146 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.405 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:52.405 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.406 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.406 [2024-11-18 23:11:11.766738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:52.406 [2024-11-18 23:11:11.766835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.406 [2024-11-18 23:11:11.766863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:52.406 [2024-11-18 23:11:11.766873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.406 [2024-11-18 23:11:11.767302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.406 [2024-11-18 23:11:11.767321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:52.406 [2024-11-18 23:11:11.767404] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:52.406 [2024-11-18 23:11:11.767430] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.406 [2024-11-18 23:11:11.767448] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:52.406 [2024-11-18 23:11:11.767468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.406 spare 00:15:52.406 [2024-11-18 23:11:11.770835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:52.406 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.406 23:11:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:52.406 [2024-11-18 23:11:11.772742] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.794 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.794 "name": "raid_bdev1", 00:15:53.794 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:53.794 "strip_size_kb": 0, 00:15:53.794 
"state": "online", 00:15:53.794 "raid_level": "raid1", 00:15:53.794 "superblock": true, 00:15:53.794 "num_base_bdevs": 2, 00:15:53.794 "num_base_bdevs_discovered": 2, 00:15:53.794 "num_base_bdevs_operational": 2, 00:15:53.794 "process": { 00:15:53.794 "type": "rebuild", 00:15:53.794 "target": "spare", 00:15:53.794 "progress": { 00:15:53.794 "blocks": 2560, 00:15:53.794 "percent": 32 00:15:53.794 } 00:15:53.794 }, 00:15:53.794 "base_bdevs_list": [ 00:15:53.794 { 00:15:53.794 "name": "spare", 00:15:53.794 "uuid": "1134e0b7-ca04-564e-b858-3f48c9e1d643", 00:15:53.794 "is_configured": true, 00:15:53.794 "data_offset": 256, 00:15:53.794 "data_size": 7936 00:15:53.794 }, 00:15:53.794 { 00:15:53.794 "name": "BaseBdev2", 00:15:53.794 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:53.794 "is_configured": true, 00:15:53.794 "data_offset": 256, 00:15:53.794 "data_size": 7936 00:15:53.794 } 00:15:53.794 ] 00:15:53.794 }' 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.795 [2024-11-18 23:11:12.937444] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.795 [2024-11-18 23:11:12.976495] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:53.795 [2024-11-18 23:11:12.976610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.795 [2024-11-18 23:11:12.976643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.795 [2024-11-18 23:11:12.976653] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.795 23:11:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.795 23:11:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.795 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.795 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.795 "name": "raid_bdev1", 00:15:53.795 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:53.795 "strip_size_kb": 0, 00:15:53.795 "state": "online", 00:15:53.795 "raid_level": "raid1", 00:15:53.795 "superblock": true, 00:15:53.795 "num_base_bdevs": 2, 00:15:53.795 "num_base_bdevs_discovered": 1, 00:15:53.795 "num_base_bdevs_operational": 1, 00:15:53.795 "base_bdevs_list": [ 00:15:53.795 { 00:15:53.795 "name": null, 00:15:53.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.795 "is_configured": false, 00:15:53.795 "data_offset": 0, 00:15:53.795 "data_size": 7936 00:15:53.795 }, 00:15:53.795 { 00:15:53.795 "name": "BaseBdev2", 00:15:53.795 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:53.795 "is_configured": true, 00:15:53.795 "data_offset": 256, 00:15:53.795 "data_size": 7936 00:15:53.795 } 00:15:53.795 ] 00:15:53.795 }' 00:15:53.795 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.795 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.364 "name": "raid_bdev1", 00:15:54.364 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:54.364 "strip_size_kb": 0, 00:15:54.364 "state": "online", 00:15:54.364 "raid_level": "raid1", 00:15:54.364 "superblock": true, 00:15:54.364 "num_base_bdevs": 2, 00:15:54.364 "num_base_bdevs_discovered": 1, 00:15:54.364 "num_base_bdevs_operational": 1, 00:15:54.364 "base_bdevs_list": [ 00:15:54.364 { 00:15:54.364 "name": null, 00:15:54.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.364 "is_configured": false, 00:15:54.364 "data_offset": 0, 00:15:54.364 "data_size": 7936 00:15:54.364 }, 00:15:54.364 { 00:15:54.364 "name": "BaseBdev2", 00:15:54.364 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:54.364 "is_configured": true, 00:15:54.364 "data_offset": 256, 00:15:54.364 "data_size": 7936 00:15:54.364 } 00:15:54.364 ] 00:15:54.364 }' 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 [2024-11-18 23:11:13.611532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.364 [2024-11-18 23:11:13.611581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.364 [2024-11-18 23:11:13.611600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:54.364 [2024-11-18 23:11:13.611610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.364 [2024-11-18 23:11:13.611970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.364 [2024-11-18 23:11:13.611991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.364 [2024-11-18 23:11:13.612052] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:54.364 [2024-11-18 23:11:13.612069] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.364 [2024-11-18 23:11:13.612079] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:54.364 [2024-11-18 23:11:13.612091] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:54.364 BaseBdev1 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.364 23:11:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.303 "name": "raid_bdev1", 00:15:55.303 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:55.303 "strip_size_kb": 0, 00:15:55.303 "state": "online", 00:15:55.303 "raid_level": "raid1", 00:15:55.303 "superblock": true, 00:15:55.303 "num_base_bdevs": 2, 00:15:55.303 "num_base_bdevs_discovered": 1, 00:15:55.303 "num_base_bdevs_operational": 1, 00:15:55.303 "base_bdevs_list": [ 00:15:55.303 { 00:15:55.303 "name": null, 00:15:55.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.303 "is_configured": false, 00:15:55.303 "data_offset": 0, 00:15:55.303 "data_size": 7936 00:15:55.303 }, 00:15:55.303 { 00:15:55.303 "name": "BaseBdev2", 00:15:55.303 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:55.303 "is_configured": true, 00:15:55.303 "data_offset": 256, 00:15:55.303 "data_size": 7936 00:15:55.303 } 00:15:55.303 ] 00:15:55.303 }' 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.303 23:11:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.887 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.887 "name": "raid_bdev1", 00:15:55.887 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:55.887 "strip_size_kb": 0, 00:15:55.887 "state": "online", 00:15:55.887 "raid_level": "raid1", 00:15:55.888 "superblock": true, 00:15:55.888 "num_base_bdevs": 2, 00:15:55.888 "num_base_bdevs_discovered": 1, 00:15:55.888 "num_base_bdevs_operational": 1, 00:15:55.888 "base_bdevs_list": [ 00:15:55.888 { 00:15:55.888 "name": null, 00:15:55.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.888 "is_configured": false, 00:15:55.888 "data_offset": 0, 00:15:55.888 "data_size": 7936 00:15:55.888 }, 00:15:55.888 { 00:15:55.888 "name": "BaseBdev2", 00:15:55.888 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:55.888 "is_configured": true, 00:15:55.888 "data_offset": 256, 00:15:55.888 "data_size": 7936 00:15:55.888 } 00:15:55.888 ] 00:15:55.888 }' 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.888 [2024-11-18 23:11:15.200822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.888 [2024-11-18 23:11:15.200999] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:55.888 [2024-11-18 23:11:15.201019] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:55.888 request: 00:15:55.888 { 00:15:55.888 "base_bdev": "BaseBdev1", 00:15:55.888 "raid_bdev": "raid_bdev1", 00:15:55.888 "method": "bdev_raid_add_base_bdev", 00:15:55.888 "req_id": 1 00:15:55.888 } 00:15:55.888 Got JSON-RPC error response 00:15:55.888 response: 00:15:55.888 { 00:15:55.888 "code": -22, 00:15:55.888 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:55.888 } 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:55.888 23:11:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:57.267 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.268 "name": "raid_bdev1", 00:15:57.268 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:57.268 "strip_size_kb": 0, 00:15:57.268 "state": "online", 00:15:57.268 "raid_level": "raid1", 00:15:57.268 "superblock": true, 00:15:57.268 "num_base_bdevs": 2, 00:15:57.268 "num_base_bdevs_discovered": 1, 00:15:57.268 "num_base_bdevs_operational": 1, 00:15:57.268 "base_bdevs_list": [ 00:15:57.268 { 00:15:57.268 "name": null, 00:15:57.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.268 "is_configured": false, 00:15:57.268 "data_offset": 0, 00:15:57.268 "data_size": 7936 00:15:57.268 }, 00:15:57.268 { 00:15:57.268 "name": "BaseBdev2", 00:15:57.268 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:57.268 "is_configured": true, 00:15:57.268 "data_offset": 256, 00:15:57.268 "data_size": 7936 00:15:57.268 } 00:15:57.268 ] 00:15:57.268 }' 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.268 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.528 23:11:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.528 "name": "raid_bdev1", 00:15:57.528 "uuid": "4a9f00dd-88dd-48e7-b013-bc9eb635db4f", 00:15:57.528 "strip_size_kb": 0, 00:15:57.528 "state": "online", 00:15:57.528 "raid_level": "raid1", 00:15:57.528 "superblock": true, 00:15:57.528 "num_base_bdevs": 2, 00:15:57.528 "num_base_bdevs_discovered": 1, 00:15:57.528 "num_base_bdevs_operational": 1, 00:15:57.528 "base_bdevs_list": [ 00:15:57.528 { 00:15:57.528 "name": null, 00:15:57.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.528 "is_configured": false, 00:15:57.528 "data_offset": 0, 00:15:57.528 "data_size": 7936 00:15:57.528 }, 00:15:57.528 { 00:15:57.528 "name": "BaseBdev2", 00:15:57.528 "uuid": "cde623b3-6729-5dc3-bff6-ddf4be4db869", 00:15:57.528 "is_configured": true, 00:15:57.528 "data_offset": 256, 00:15:57.528 "data_size": 7936 00:15:57.528 } 00:15:57.528 ] 00:15:57.528 }' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.528 23:11:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96816 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96816 ']' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96816 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96816 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.528 killing process with pid 96816 00:15:57.528 Received shutdown signal, test time was about 60.000000 seconds 00:15:57.528 00:15:57.528 Latency(us) 00:15:57.528 [2024-11-18T23:11:16.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.528 [2024-11-18T23:11:16.906Z] =================================================================================================================== 00:15:57.528 [2024-11-18T23:11:16.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96816' 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96816 00:15:57.528 [2024-11-18 23:11:16.875048] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.528 [2024-11-18 23:11:16.875158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.528 [2024-11-18 23:11:16.875202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:15:57.528 [2024-11-18 23:11:16.875211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:57.528 23:11:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96816 00:15:57.788 [2024-11-18 23:11:16.906055] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.789 23:11:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:57.789 00:15:57.789 real 0m18.579s 00:15:57.789 user 0m24.795s 00:15:57.789 sys 0m2.679s 00:15:57.789 23:11:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.789 23:11:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.789 ************************************ 00:15:57.789 END TEST raid_rebuild_test_sb_4k 00:15:57.789 ************************************ 00:15:58.049 23:11:17 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:58.049 23:11:17 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:58.049 23:11:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:58.049 23:11:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:58.049 23:11:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.049 ************************************ 00:15:58.049 START TEST raid_state_function_test_sb_md_separate 00:15:58.049 ************************************ 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:58.049 23:11:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:58.049 23:11:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97495 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97495' 00:15:58.049 Process raid pid: 97495 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97495 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97495 ']' 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.049 23:11:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.049 [2024-11-18 23:11:17.318478] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:58.049 [2024-11-18 23:11:17.318651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.309 [2024-11-18 23:11:17.485496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.309 [2024-11-18 23:11:17.532203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.309 [2024-11-18 23:11:17.575065] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.309 [2024-11-18 23:11:17.575182] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.877 [2024-11-18 23:11:18.148728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.877 [2024-11-18 23:11:18.148785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:58.877 [2024-11-18 23:11:18.148796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.877 [2024-11-18 23:11:18.148806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.877 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.878 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.878 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.878 "name": "Existed_Raid", 00:15:58.878 "uuid": "31050cad-d2e7-4e52-992e-4de5691c01e9", 00:15:58.878 "strip_size_kb": 0, 00:15:58.878 "state": "configuring", 00:15:58.878 "raid_level": "raid1", 00:15:58.878 "superblock": true, 00:15:58.878 "num_base_bdevs": 2, 00:15:58.878 "num_base_bdevs_discovered": 0, 00:15:58.878 "num_base_bdevs_operational": 2, 00:15:58.878 "base_bdevs_list": [ 00:15:58.878 { 00:15:58.878 "name": "BaseBdev1", 00:15:58.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.878 "is_configured": false, 00:15:58.878 "data_offset": 0, 00:15:58.878 "data_size": 0 00:15:58.878 }, 00:15:58.878 { 00:15:58.878 "name": "BaseBdev2", 00:15:58.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.878 "is_configured": false, 00:15:58.878 "data_offset": 0, 00:15:58.878 "data_size": 0 00:15:58.878 } 00:15:58.878 ] 00:15:58.878 }' 00:15:58.878 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.878 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.451 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:59.451 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.451 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.451 
[2024-11-18 23:11:18.599844] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.451 [2024-11-18 23:11:18.599950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:59.451 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.452 [2024-11-18 23:11:18.611859] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.452 [2024-11-18 23:11:18.611936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.452 [2024-11-18 23:11:18.611973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.452 [2024-11-18 23:11:18.612010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.452 [2024-11-18 23:11:18.633174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.452 
BaseBdev1 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.452 [ 00:15:59.452 { 00:15:59.452 "name": "BaseBdev1", 00:15:59.452 "aliases": [ 00:15:59.452 "c2260d75-eed0-40b1-aae6-3ee2435d1e66" 00:15:59.452 ], 00:15:59.452 "product_name": "Malloc disk", 
00:15:59.452 "block_size": 4096, 00:15:59.452 "num_blocks": 8192, 00:15:59.452 "uuid": "c2260d75-eed0-40b1-aae6-3ee2435d1e66", 00:15:59.452 "md_size": 32, 00:15:59.452 "md_interleave": false, 00:15:59.452 "dif_type": 0, 00:15:59.452 "assigned_rate_limits": { 00:15:59.452 "rw_ios_per_sec": 0, 00:15:59.452 "rw_mbytes_per_sec": 0, 00:15:59.452 "r_mbytes_per_sec": 0, 00:15:59.452 "w_mbytes_per_sec": 0 00:15:59.452 }, 00:15:59.452 "claimed": true, 00:15:59.452 "claim_type": "exclusive_write", 00:15:59.452 "zoned": false, 00:15:59.452 "supported_io_types": { 00:15:59.452 "read": true, 00:15:59.452 "write": true, 00:15:59.452 "unmap": true, 00:15:59.452 "flush": true, 00:15:59.452 "reset": true, 00:15:59.452 "nvme_admin": false, 00:15:59.452 "nvme_io": false, 00:15:59.452 "nvme_io_md": false, 00:15:59.452 "write_zeroes": true, 00:15:59.452 "zcopy": true, 00:15:59.452 "get_zone_info": false, 00:15:59.452 "zone_management": false, 00:15:59.452 "zone_append": false, 00:15:59.452 "compare": false, 00:15:59.452 "compare_and_write": false, 00:15:59.452 "abort": true, 00:15:59.452 "seek_hole": false, 00:15:59.452 "seek_data": false, 00:15:59.452 "copy": true, 00:15:59.452 "nvme_iov_md": false 00:15:59.452 }, 00:15:59.452 "memory_domains": [ 00:15:59.452 { 00:15:59.452 "dma_device_id": "system", 00:15:59.452 "dma_device_type": 1 00:15:59.452 }, 00:15:59.452 { 00:15:59.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.452 "dma_device_type": 2 00:15:59.452 } 00:15:59.452 ], 00:15:59.452 "driver_specific": {} 00:15:59.452 } 00:15:59.452 ] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:59.452 23:11:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.452 "name": "Existed_Raid", 00:15:59.452 "uuid": "f34c89a0-280f-4975-8ed6-2624b6ecc7f3", 
00:15:59.452 "strip_size_kb": 0, 00:15:59.452 "state": "configuring", 00:15:59.452 "raid_level": "raid1", 00:15:59.452 "superblock": true, 00:15:59.452 "num_base_bdevs": 2, 00:15:59.452 "num_base_bdevs_discovered": 1, 00:15:59.452 "num_base_bdevs_operational": 2, 00:15:59.452 "base_bdevs_list": [ 00:15:59.452 { 00:15:59.452 "name": "BaseBdev1", 00:15:59.452 "uuid": "c2260d75-eed0-40b1-aae6-3ee2435d1e66", 00:15:59.452 "is_configured": true, 00:15:59.452 "data_offset": 256, 00:15:59.452 "data_size": 7936 00:15:59.452 }, 00:15:59.452 { 00:15:59.452 "name": "BaseBdev2", 00:15:59.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.452 "is_configured": false, 00:15:59.452 "data_offset": 0, 00:15:59.452 "data_size": 0 00:15:59.452 } 00:15:59.452 ] 00:15:59.452 }' 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.452 23:11:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.070 [2024-11-18 23:11:19.156347] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.070 [2024-11-18 23:11:19.156384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:00.070 23:11:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.070 [2024-11-18 23:11:19.168385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.070 [2024-11-18 23:11:19.170252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.070 [2024-11-18 23:11:19.170363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:00.070 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.071 "name": "Existed_Raid", 00:16:00.071 "uuid": "a4112524-1765-4ec6-8997-eb79745edf2b", 00:16:00.071 "strip_size_kb": 0, 00:16:00.071 "state": "configuring", 00:16:00.071 "raid_level": "raid1", 00:16:00.071 "superblock": true, 00:16:00.071 "num_base_bdevs": 2, 00:16:00.071 "num_base_bdevs_discovered": 1, 00:16:00.071 "num_base_bdevs_operational": 2, 00:16:00.071 "base_bdevs_list": [ 00:16:00.071 { 00:16:00.071 "name": "BaseBdev1", 00:16:00.071 "uuid": "c2260d75-eed0-40b1-aae6-3ee2435d1e66", 00:16:00.071 "is_configured": true, 00:16:00.071 "data_offset": 256, 00:16:00.071 "data_size": 7936 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "name": "BaseBdev2", 00:16:00.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.071 "is_configured": false, 00:16:00.071 "data_offset": 0, 00:16:00.071 "data_size": 0 00:16:00.071 } 00:16:00.071 ] 00:16:00.071 }' 00:16:00.071 23:11:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.071 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.330 [2024-11-18 23:11:19.654096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.330 [2024-11-18 23:11:19.654839] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:00.330 [2024-11-18 23:11:19.655012] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:00.330 BaseBdev2 00:16:00.330 [2024-11-18 23:11:19.655465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:00.330 [2024-11-18 23:11:19.655768] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:00.330 [2024-11-18 23:11:19.655831] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:00.330 [2024-11-18 23:11:19.656090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.330 [ 00:16:00.330 { 00:16:00.330 "name": "BaseBdev2", 00:16:00.330 "aliases": [ 00:16:00.330 "3d1b213f-fa30-48d5-937d-ef59abd9fca0" 00:16:00.330 ], 00:16:00.330 "product_name": "Malloc disk", 00:16:00.330 "block_size": 4096, 00:16:00.330 "num_blocks": 8192, 00:16:00.330 "uuid": "3d1b213f-fa30-48d5-937d-ef59abd9fca0", 00:16:00.330 "md_size": 32, 00:16:00.330 "md_interleave": false, 00:16:00.330 "dif_type": 0, 00:16:00.330 "assigned_rate_limits": { 00:16:00.330 "rw_ios_per_sec": 0, 00:16:00.330 "rw_mbytes_per_sec": 0, 00:16:00.330 "r_mbytes_per_sec": 0, 00:16:00.330 "w_mbytes_per_sec": 0 00:16:00.330 }, 00:16:00.330 "claimed": true, 00:16:00.330 "claim_type": 
"exclusive_write", 00:16:00.330 "zoned": false, 00:16:00.330 "supported_io_types": { 00:16:00.330 "read": true, 00:16:00.330 "write": true, 00:16:00.330 "unmap": true, 00:16:00.330 "flush": true, 00:16:00.330 "reset": true, 00:16:00.330 "nvme_admin": false, 00:16:00.330 "nvme_io": false, 00:16:00.330 "nvme_io_md": false, 00:16:00.330 "write_zeroes": true, 00:16:00.330 "zcopy": true, 00:16:00.330 "get_zone_info": false, 00:16:00.330 "zone_management": false, 00:16:00.330 "zone_append": false, 00:16:00.330 "compare": false, 00:16:00.330 "compare_and_write": false, 00:16:00.330 "abort": true, 00:16:00.330 "seek_hole": false, 00:16:00.330 "seek_data": false, 00:16:00.330 "copy": true, 00:16:00.330 "nvme_iov_md": false 00:16:00.330 }, 00:16:00.330 "memory_domains": [ 00:16:00.330 { 00:16:00.330 "dma_device_id": "system", 00:16:00.330 "dma_device_type": 1 00:16:00.330 }, 00:16:00.330 { 00:16:00.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.330 "dma_device_type": 2 00:16:00.330 } 00:16:00.330 ], 00:16:00.330 "driver_specific": {} 00:16:00.330 } 00:16:00.330 ] 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.330 
23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.330 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.600 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.600 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.600 "name": "Existed_Raid", 00:16:00.600 "uuid": "a4112524-1765-4ec6-8997-eb79745edf2b", 00:16:00.600 "strip_size_kb": 0, 00:16:00.600 "state": "online", 00:16:00.600 "raid_level": "raid1", 00:16:00.600 "superblock": true, 00:16:00.600 "num_base_bdevs": 2, 00:16:00.600 "num_base_bdevs_discovered": 2, 00:16:00.600 "num_base_bdevs_operational": 2, 00:16:00.600 
"base_bdevs_list": [ 00:16:00.600 { 00:16:00.600 "name": "BaseBdev1", 00:16:00.600 "uuid": "c2260d75-eed0-40b1-aae6-3ee2435d1e66", 00:16:00.600 "is_configured": true, 00:16:00.600 "data_offset": 256, 00:16:00.600 "data_size": 7936 00:16:00.600 }, 00:16:00.600 { 00:16:00.600 "name": "BaseBdev2", 00:16:00.600 "uuid": "3d1b213f-fa30-48d5-937d-ef59abd9fca0", 00:16:00.600 "is_configured": true, 00:16:00.600 "data_offset": 256, 00:16:00.600 "data_size": 7936 00:16:00.600 } 00:16:00.600 ] 00:16:00.600 }' 00:16:00.600 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.600 23:11:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:16:00.860 [2024-11-18 23:11:20.145508] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.860 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.860 "name": "Existed_Raid", 00:16:00.860 "aliases": [ 00:16:00.860 "a4112524-1765-4ec6-8997-eb79745edf2b" 00:16:00.860 ], 00:16:00.860 "product_name": "Raid Volume", 00:16:00.860 "block_size": 4096, 00:16:00.860 "num_blocks": 7936, 00:16:00.860 "uuid": "a4112524-1765-4ec6-8997-eb79745edf2b", 00:16:00.860 "md_size": 32, 00:16:00.860 "md_interleave": false, 00:16:00.860 "dif_type": 0, 00:16:00.860 "assigned_rate_limits": { 00:16:00.860 "rw_ios_per_sec": 0, 00:16:00.860 "rw_mbytes_per_sec": 0, 00:16:00.860 "r_mbytes_per_sec": 0, 00:16:00.860 "w_mbytes_per_sec": 0 00:16:00.860 }, 00:16:00.860 "claimed": false, 00:16:00.860 "zoned": false, 00:16:00.860 "supported_io_types": { 00:16:00.860 "read": true, 00:16:00.860 "write": true, 00:16:00.860 "unmap": false, 00:16:00.860 "flush": false, 00:16:00.860 "reset": true, 00:16:00.860 "nvme_admin": false, 00:16:00.860 "nvme_io": false, 00:16:00.860 "nvme_io_md": false, 00:16:00.860 "write_zeroes": true, 00:16:00.860 "zcopy": false, 00:16:00.860 "get_zone_info": false, 00:16:00.860 "zone_management": false, 00:16:00.860 "zone_append": false, 00:16:00.860 "compare": false, 00:16:00.860 "compare_and_write": false, 00:16:00.860 "abort": false, 00:16:00.860 "seek_hole": false, 00:16:00.860 "seek_data": false, 00:16:00.860 "copy": false, 00:16:00.860 "nvme_iov_md": false 00:16:00.860 }, 00:16:00.860 "memory_domains": [ 00:16:00.860 { 00:16:00.861 "dma_device_id": "system", 00:16:00.861 "dma_device_type": 1 00:16:00.861 }, 00:16:00.861 { 00:16:00.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.861 "dma_device_type": 2 00:16:00.861 }, 00:16:00.861 { 
00:16:00.861 "dma_device_id": "system", 00:16:00.861 "dma_device_type": 1 00:16:00.861 }, 00:16:00.861 { 00:16:00.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.861 "dma_device_type": 2 00:16:00.861 } 00:16:00.861 ], 00:16:00.861 "driver_specific": { 00:16:00.861 "raid": { 00:16:00.861 "uuid": "a4112524-1765-4ec6-8997-eb79745edf2b", 00:16:00.861 "strip_size_kb": 0, 00:16:00.861 "state": "online", 00:16:00.861 "raid_level": "raid1", 00:16:00.861 "superblock": true, 00:16:00.861 "num_base_bdevs": 2, 00:16:00.861 "num_base_bdevs_discovered": 2, 00:16:00.861 "num_base_bdevs_operational": 2, 00:16:00.861 "base_bdevs_list": [ 00:16:00.861 { 00:16:00.861 "name": "BaseBdev1", 00:16:00.861 "uuid": "c2260d75-eed0-40b1-aae6-3ee2435d1e66", 00:16:00.861 "is_configured": true, 00:16:00.861 "data_offset": 256, 00:16:00.861 "data_size": 7936 00:16:00.861 }, 00:16:00.861 { 00:16:00.861 "name": "BaseBdev2", 00:16:00.861 "uuid": "3d1b213f-fa30-48d5-937d-ef59abd9fca0", 00:16:00.861 "is_configured": true, 00:16:00.861 "data_offset": 256, 00:16:00.861 "data_size": 7936 00:16:00.861 } 00:16:00.861 ] 00:16:00.861 } 00:16:00.861 } 00:16:00.861 }' 00:16:00.861 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.861 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:00.861 BaseBdev2' 00:16:00.861 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.120 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.121 [2024-11-18 23:11:20.368902] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.121 "name": "Existed_Raid", 00:16:01.121 "uuid": "a4112524-1765-4ec6-8997-eb79745edf2b", 00:16:01.121 "strip_size_kb": 0, 00:16:01.121 "state": "online", 00:16:01.121 "raid_level": "raid1", 00:16:01.121 "superblock": true, 00:16:01.121 "num_base_bdevs": 2, 00:16:01.121 "num_base_bdevs_discovered": 1, 00:16:01.121 "num_base_bdevs_operational": 1, 00:16:01.121 "base_bdevs_list": [ 00:16:01.121 { 00:16:01.121 "name": null, 00:16:01.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.121 "is_configured": false, 00:16:01.121 "data_offset": 0, 00:16:01.121 "data_size": 7936 00:16:01.121 }, 00:16:01.121 { 00:16:01.121 "name": "BaseBdev2", 00:16:01.121 "uuid": 
"3d1b213f-fa30-48d5-937d-ef59abd9fca0", 00:16:01.121 "is_configured": true, 00:16:01.121 "data_offset": 256, 00:16:01.121 "data_size": 7936 00:16:01.121 } 00:16:01.121 ] 00:16:01.121 }' 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.121 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.692 [2024-11-18 23:11:20.832025] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.692 [2024-11-18 23:11:20.832122] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.692 [2024-11-18 23:11:20.844370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.692 [2024-11-18 23:11:20.844489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.692 [2024-11-18 23:11:20.844548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:01.692 23:11:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97495 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97495 ']' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97495 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97495 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97495' 00:16:01.692 killing process with pid 97495 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97495 00:16:01.692 [2024-11-18 23:11:20.936244] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.692 23:11:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97495 00:16:01.692 [2024-11-18 23:11:20.937276] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.951 23:11:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:01.951 00:16:01.951 real 0m3.968s 00:16:01.951 user 0m6.187s 00:16:01.951 sys 0m0.884s 00:16:01.951 23:11:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.951 
23:11:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.951 ************************************ 00:16:01.951 END TEST raid_state_function_test_sb_md_separate 00:16:01.951 ************************************ 00:16:01.951 23:11:21 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:01.951 23:11:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:01.951 23:11:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.951 23:11:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.951 ************************************ 00:16:01.951 START TEST raid_superblock_test_md_separate 00:16:01.951 ************************************ 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:01.951 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97736 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97736 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97736 ']' 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.952 23:11:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.211 [2024-11-18 23:11:21.356724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:02.212 [2024-11-18 23:11:21.356942] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97736 ] 00:16:02.212 [2024-11-18 23:11:21.523774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.212 [2024-11-18 23:11:21.569810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.472 [2024-11-18 23:11:21.612214] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.472 [2024-11-18 23:11:21.612253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:03.046 23:11:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 malloc1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 [2024-11-18 23:11:22.206969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.046 [2024-11-18 23:11:22.207030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.046 [2024-11-18 23:11:22.207054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.046 [2024-11-18 23:11:22.207067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.046 [2024-11-18 23:11:22.209000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.046 [2024-11-18 23:11:22.209038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:03.046 pt1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 malloc2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.046 23:11:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 [2024-11-18 23:11:22.252842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.046 [2024-11-18 23:11:22.253062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.046 [2024-11-18 23:11:22.253188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.046 [2024-11-18 23:11:22.253348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.046 [2024-11-18 23:11:22.257749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.046 [2024-11-18 23:11:22.257916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.046 pt2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 [2024-11-18 23:11:22.266264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.046 [2024-11-18 23:11:22.268943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.046 [2024-11-18 23:11:22.269164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:03.046 [2024-11-18 23:11:22.269227] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.046 [2024-11-18 23:11:22.269378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:03.046 [2024-11-18 23:11:22.269542] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:03.046 [2024-11-18 23:11:22.269604] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:03.046 [2024-11-18 23:11:22.269781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.046 23:11:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.046 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.046 "name": "raid_bdev1", 00:16:03.046 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:03.046 "strip_size_kb": 0, 00:16:03.046 "state": "online", 00:16:03.046 "raid_level": "raid1", 00:16:03.046 "superblock": true, 00:16:03.046 "num_base_bdevs": 2, 00:16:03.046 "num_base_bdevs_discovered": 2, 00:16:03.046 "num_base_bdevs_operational": 2, 00:16:03.046 "base_bdevs_list": [ 00:16:03.046 { 00:16:03.046 "name": "pt1", 00:16:03.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.046 "is_configured": true, 00:16:03.046 "data_offset": 256, 00:16:03.046 "data_size": 7936 00:16:03.046 }, 00:16:03.046 { 00:16:03.046 "name": "pt2", 00:16:03.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.046 "is_configured": true, 00:16:03.046 "data_offset": 256, 00:16:03.046 "data_size": 7936 00:16:03.046 } 00:16:03.046 ] 00:16:03.046 }' 00:16:03.047 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.047 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.616 [2024-11-18 23:11:22.741665] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.616 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.616 "name": "raid_bdev1", 00:16:03.616 "aliases": [ 00:16:03.616 "4a22095f-1a1a-4d62-b9bb-470658cd825b" 00:16:03.616 ], 00:16:03.616 "product_name": "Raid Volume", 00:16:03.616 "block_size": 4096, 00:16:03.616 "num_blocks": 7936, 00:16:03.616 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:03.616 "md_size": 32, 00:16:03.616 "md_interleave": false, 00:16:03.616 "dif_type": 0, 00:16:03.616 "assigned_rate_limits": { 00:16:03.616 "rw_ios_per_sec": 0, 00:16:03.616 "rw_mbytes_per_sec": 0, 00:16:03.616 "r_mbytes_per_sec": 0, 00:16:03.616 "w_mbytes_per_sec": 0 00:16:03.616 }, 00:16:03.616 "claimed": false, 00:16:03.616 "zoned": false, 
00:16:03.617 "supported_io_types": { 00:16:03.617 "read": true, 00:16:03.617 "write": true, 00:16:03.617 "unmap": false, 00:16:03.617 "flush": false, 00:16:03.617 "reset": true, 00:16:03.617 "nvme_admin": false, 00:16:03.617 "nvme_io": false, 00:16:03.617 "nvme_io_md": false, 00:16:03.617 "write_zeroes": true, 00:16:03.617 "zcopy": false, 00:16:03.617 "get_zone_info": false, 00:16:03.617 "zone_management": false, 00:16:03.617 "zone_append": false, 00:16:03.617 "compare": false, 00:16:03.617 "compare_and_write": false, 00:16:03.617 "abort": false, 00:16:03.617 "seek_hole": false, 00:16:03.617 "seek_data": false, 00:16:03.617 "copy": false, 00:16:03.617 "nvme_iov_md": false 00:16:03.617 }, 00:16:03.617 "memory_domains": [ 00:16:03.617 { 00:16:03.617 "dma_device_id": "system", 00:16:03.617 "dma_device_type": 1 00:16:03.617 }, 00:16:03.617 { 00:16:03.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.617 "dma_device_type": 2 00:16:03.617 }, 00:16:03.617 { 00:16:03.617 "dma_device_id": "system", 00:16:03.617 "dma_device_type": 1 00:16:03.617 }, 00:16:03.617 { 00:16:03.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.617 "dma_device_type": 2 00:16:03.617 } 00:16:03.617 ], 00:16:03.617 "driver_specific": { 00:16:03.617 "raid": { 00:16:03.617 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:03.617 "strip_size_kb": 0, 00:16:03.617 "state": "online", 00:16:03.617 "raid_level": "raid1", 00:16:03.617 "superblock": true, 00:16:03.617 "num_base_bdevs": 2, 00:16:03.617 "num_base_bdevs_discovered": 2, 00:16:03.617 "num_base_bdevs_operational": 2, 00:16:03.617 "base_bdevs_list": [ 00:16:03.617 { 00:16:03.617 "name": "pt1", 00:16:03.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.617 "is_configured": true, 00:16:03.617 "data_offset": 256, 00:16:03.617 "data_size": 7936 00:16:03.617 }, 00:16:03.617 { 00:16:03.617 "name": "pt2", 00:16:03.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.617 "is_configured": true, 00:16:03.617 "data_offset": 256, 
00:16:03.617 "data_size": 7936 00:16:03.617 } 00:16:03.617 ] 00:16:03.617 } 00:16:03.617 } 00:16:03.617 }' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:03.617 pt2' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.617 [2024-11-18 23:11:22.965139] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.617 23:11:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4a22095f-1a1a-4d62-b9bb-470658cd825b 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 4a22095f-1a1a-4d62-b9bb-470658cd825b ']' 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.877 23:11:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.877 [2024-11-18 23:11:23.012865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.877 [2024-11-18 23:11:23.012888] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.877 [2024-11-18 23:11:23.012959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.877 [2024-11-18 23:11:23.013011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.877 [2024-11-18 23:11:23.013019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:03.877 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 [2024-11-18 23:11:23.152638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:03.878 [2024-11-18 23:11:23.154483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:03.878 [2024-11-18 23:11:23.154602] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:03.878 [2024-11-18 23:11:23.154695] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:03.878 [2024-11-18 23:11:23.154757] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.878 [2024-11-18 23:11:23.154848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:16:03.878 request: 00:16:03.878 { 00:16:03.878 "name": "raid_bdev1", 00:16:03.878 "raid_level": "raid1", 00:16:03.878 "base_bdevs": [ 00:16:03.878 "malloc1", 00:16:03.878 "malloc2" 00:16:03.878 ], 00:16:03.878 "superblock": false, 00:16:03.878 "method": "bdev_raid_create", 00:16:03.878 "req_id": 1 00:16:03.878 } 00:16:03.878 Got JSON-RPC error response 00:16:03.878 response: 00:16:03.878 { 00:16:03.878 "code": -17, 00:16:03.878 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:03.878 } 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 [2024-11-18 23:11:23.220480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.878 [2024-11-18 23:11:23.220562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.878 [2024-11-18 23:11:23.220609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:03.878 [2024-11-18 23:11:23.220642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.878 [2024-11-18 23:11:23.222446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.878 [2024-11-18 23:11:23.222512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:03.878 [2024-11-18 23:11:23.222600] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:03.878 [2024-11-18 23:11:23.222671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.878 pt1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.138 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.138 "name": "raid_bdev1", 00:16:04.138 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:04.138 "strip_size_kb": 0, 00:16:04.138 "state": "configuring", 00:16:04.138 "raid_level": "raid1", 00:16:04.138 "superblock": true, 00:16:04.138 "num_base_bdevs": 2, 00:16:04.138 "num_base_bdevs_discovered": 1, 00:16:04.138 "num_base_bdevs_operational": 2, 00:16:04.138 "base_bdevs_list": [ 00:16:04.138 { 00:16:04.138 "name": "pt1", 00:16:04.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.138 "is_configured": true, 00:16:04.138 "data_offset": 256, 00:16:04.138 "data_size": 7936 00:16:04.139 }, 00:16:04.139 { 
00:16:04.139 "name": null, 00:16:04.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.139 "is_configured": false, 00:16:04.139 "data_offset": 256, 00:16:04.139 "data_size": 7936 00:16:04.139 } 00:16:04.139 ] 00:16:04.139 }' 00:16:04.139 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.139 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.403 [2024-11-18 23:11:23.631830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.403 [2024-11-18 23:11:23.631880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.403 [2024-11-18 23:11:23.631898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:04.403 [2024-11-18 23:11:23.631906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.403 [2024-11-18 23:11:23.632049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.403 [2024-11-18 23:11:23.632062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.403 [2024-11-18 23:11:23.632099] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.403 [2024-11-18 23:11:23.632116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.403 [2024-11-18 23:11:23.632189] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:04.403 [2024-11-18 23:11:23.632197] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:04.403 [2024-11-18 23:11:23.632261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:04.403 [2024-11-18 23:11:23.632351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:04.403 [2024-11-18 23:11:23.632365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:04.403 [2024-11-18 23:11:23.632436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.403 pt2 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.403 23:11:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.403 "name": "raid_bdev1", 00:16:04.403 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:04.403 "strip_size_kb": 0, 00:16:04.403 "state": "online", 00:16:04.403 "raid_level": "raid1", 00:16:04.403 "superblock": true, 00:16:04.403 "num_base_bdevs": 2, 00:16:04.403 "num_base_bdevs_discovered": 2, 00:16:04.403 "num_base_bdevs_operational": 2, 00:16:04.403 "base_bdevs_list": [ 00:16:04.403 { 00:16:04.403 "name": "pt1", 00:16:04.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.403 "is_configured": true, 00:16:04.403 "data_offset": 256, 00:16:04.403 "data_size": 7936 00:16:04.403 }, 00:16:04.403 { 00:16:04.403 "name": "pt2", 00:16:04.403 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:04.403 "is_configured": true, 00:16:04.403 "data_offset": 256, 00:16:04.403 "data_size": 7936 00:16:04.403 } 00:16:04.403 ] 00:16:04.403 }' 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.403 23:11:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.973 [2024-11-18 23:11:24.071445] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.973 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.973 "name": "raid_bdev1", 00:16:04.973 
"aliases": [ 00:16:04.973 "4a22095f-1a1a-4d62-b9bb-470658cd825b" 00:16:04.973 ], 00:16:04.973 "product_name": "Raid Volume", 00:16:04.973 "block_size": 4096, 00:16:04.973 "num_blocks": 7936, 00:16:04.973 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:04.973 "md_size": 32, 00:16:04.973 "md_interleave": false, 00:16:04.973 "dif_type": 0, 00:16:04.973 "assigned_rate_limits": { 00:16:04.973 "rw_ios_per_sec": 0, 00:16:04.973 "rw_mbytes_per_sec": 0, 00:16:04.973 "r_mbytes_per_sec": 0, 00:16:04.973 "w_mbytes_per_sec": 0 00:16:04.973 }, 00:16:04.973 "claimed": false, 00:16:04.973 "zoned": false, 00:16:04.973 "supported_io_types": { 00:16:04.973 "read": true, 00:16:04.973 "write": true, 00:16:04.973 "unmap": false, 00:16:04.973 "flush": false, 00:16:04.973 "reset": true, 00:16:04.973 "nvme_admin": false, 00:16:04.973 "nvme_io": false, 00:16:04.973 "nvme_io_md": false, 00:16:04.973 "write_zeroes": true, 00:16:04.973 "zcopy": false, 00:16:04.973 "get_zone_info": false, 00:16:04.973 "zone_management": false, 00:16:04.973 "zone_append": false, 00:16:04.973 "compare": false, 00:16:04.973 "compare_and_write": false, 00:16:04.973 "abort": false, 00:16:04.973 "seek_hole": false, 00:16:04.973 "seek_data": false, 00:16:04.973 "copy": false, 00:16:04.973 "nvme_iov_md": false 00:16:04.973 }, 00:16:04.973 "memory_domains": [ 00:16:04.973 { 00:16:04.973 "dma_device_id": "system", 00:16:04.973 "dma_device_type": 1 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.973 "dma_device_type": 2 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "dma_device_id": "system", 00:16:04.973 "dma_device_type": 1 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.973 "dma_device_type": 2 00:16:04.973 } 00:16:04.973 ], 00:16:04.973 "driver_specific": { 00:16:04.973 "raid": { 00:16:04.973 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:04.973 "strip_size_kb": 0, 00:16:04.973 "state": "online", 00:16:04.973 
"raid_level": "raid1", 00:16:04.973 "superblock": true, 00:16:04.973 "num_base_bdevs": 2, 00:16:04.973 "num_base_bdevs_discovered": 2, 00:16:04.973 "num_base_bdevs_operational": 2, 00:16:04.973 "base_bdevs_list": [ 00:16:04.973 { 00:16:04.973 "name": "pt1", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.973 "is_configured": true, 00:16:04.973 "data_offset": 256, 00:16:04.973 "data_size": 7936 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "name": "pt2", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.973 "is_configured": true, 00:16:04.973 "data_offset": 256, 00:16:04.974 "data_size": 7936 00:16:04.974 } 00:16:04.974 ] 00:16:04.974 } 00:16:04.974 } 00:16:04.974 }' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:04.974 pt2' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 23:11:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 [2024-11-18 23:11:24.295006] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 4a22095f-1a1a-4d62-b9bb-470658cd825b '!=' 4a22095f-1a1a-4d62-b9bb-470658cd825b ']' 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 [2024-11-18 23:11:24.342712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:04.974 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.232 
23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.232 "name": "raid_bdev1", 00:16:05.232 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:05.232 "strip_size_kb": 0, 00:16:05.232 "state": "online", 00:16:05.232 "raid_level": "raid1", 00:16:05.232 "superblock": true, 00:16:05.232 "num_base_bdevs": 2, 00:16:05.232 "num_base_bdevs_discovered": 1, 00:16:05.232 "num_base_bdevs_operational": 1, 00:16:05.232 "base_bdevs_list": [ 00:16:05.232 { 00:16:05.232 "name": null, 00:16:05.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.232 "is_configured": false, 00:16:05.232 "data_offset": 0, 00:16:05.232 "data_size": 7936 00:16:05.232 }, 00:16:05.232 { 00:16:05.232 "name": "pt2", 00:16:05.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.232 "is_configured": true, 00:16:05.232 "data_offset": 256, 00:16:05.232 "data_size": 7936 00:16:05.232 } 
00:16:05.232 ] 00:16:05.232 }' 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.232 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.492 [2024-11-18 23:11:24.793894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.492 [2024-11-18 23:11:24.793961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.492 [2024-11-18 23:11:24.794061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.492 [2024-11-18 23:11:24.794134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.492 [2024-11-18 23:11:24.794186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.492 23:11:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.492 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.751 [2024-11-18 23:11:24.869771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.751 [2024-11-18 
23:11:24.869814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.751 [2024-11-18 23:11:24.869829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:05.751 [2024-11-18 23:11:24.869837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.751 [2024-11-18 23:11:24.871810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.751 [2024-11-18 23:11:24.871895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.751 [2024-11-18 23:11:24.871953] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.751 [2024-11-18 23:11:24.871981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.751 [2024-11-18 23:11:24.872040] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:05.751 [2024-11-18 23:11:24.872048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:05.751 [2024-11-18 23:11:24.872114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:05.751 [2024-11-18 23:11:24.872187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:05.751 [2024-11-18 23:11:24.872196] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:05.751 [2024-11-18 23:11:24.872252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.751 pt2 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.751 "name": "raid_bdev1", 00:16:05.751 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:05.751 "strip_size_kb": 0, 00:16:05.751 "state": "online", 00:16:05.751 "raid_level": "raid1", 00:16:05.751 "superblock": true, 00:16:05.751 "num_base_bdevs": 2, 00:16:05.751 
"num_base_bdevs_discovered": 1, 00:16:05.751 "num_base_bdevs_operational": 1, 00:16:05.751 "base_bdevs_list": [ 00:16:05.751 { 00:16:05.751 "name": null, 00:16:05.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.751 "is_configured": false, 00:16:05.751 "data_offset": 256, 00:16:05.751 "data_size": 7936 00:16:05.751 }, 00:16:05.751 { 00:16:05.751 "name": "pt2", 00:16:05.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.751 "is_configured": true, 00:16:05.751 "data_offset": 256, 00:16:05.751 "data_size": 7936 00:16:05.751 } 00:16:05.751 ] 00:16:05.751 }' 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.751 23:11:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.011 [2024-11-18 23:11:25.356944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.011 [2024-11-18 23:11:25.357008] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.011 [2024-11-18 23:11:25.357091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.011 [2024-11-18 23:11:25.357156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.011 [2024-11-18 23:11:25.357226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.011 23:11:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.011 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.281 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.282 [2024-11-18 23:11:25.420828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:06.282 [2024-11-18 23:11:25.420920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.282 [2024-11-18 23:11:25.420966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:06.282 [2024-11-18 23:11:25.421004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.282 [2024-11-18 23:11:25.422884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.282 [2024-11-18 23:11:25.422958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:16:06.282 [2024-11-18 23:11:25.423031] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:06.282 [2024-11-18 23:11:25.423096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:06.282 [2024-11-18 23:11:25.423229] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:06.282 [2024-11-18 23:11:25.423309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.282 [2024-11-18 23:11:25.423374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:06.282 [2024-11-18 23:11:25.423484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:06.282 [2024-11-18 23:11:25.423588] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:06.282 [2024-11-18 23:11:25.423632] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:06.282 [2024-11-18 23:11:25.423727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:06.282 [2024-11-18 23:11:25.423840] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:06.282 [2024-11-18 23:11:25.423879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:06.282 [2024-11-18 23:11:25.424008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.282 pt1 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.282 "name": "raid_bdev1", 00:16:06.282 "uuid": "4a22095f-1a1a-4d62-b9bb-470658cd825b", 00:16:06.282 "strip_size_kb": 0, 00:16:06.282 "state": "online", 00:16:06.282 "raid_level": "raid1", 
00:16:06.282 "superblock": true, 00:16:06.282 "num_base_bdevs": 2, 00:16:06.282 "num_base_bdevs_discovered": 1, 00:16:06.282 "num_base_bdevs_operational": 1, 00:16:06.282 "base_bdevs_list": [ 00:16:06.282 { 00:16:06.282 "name": null, 00:16:06.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.282 "is_configured": false, 00:16:06.282 "data_offset": 256, 00:16:06.282 "data_size": 7936 00:16:06.282 }, 00:16:06.282 { 00:16:06.282 "name": "pt2", 00:16:06.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.282 "is_configured": true, 00:16:06.282 "data_offset": 256, 00:16:06.282 "data_size": 7936 00:16:06.282 } 00:16:06.282 ] 00:16:06.282 }' 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.282 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.560 
23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.560 [2024-11-18 23:11:25.904218] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.560 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 4a22095f-1a1a-4d62-b9bb-470658cd825b '!=' 4a22095f-1a1a-4d62-b9bb-470658cd825b ']' 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97736 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97736 ']' 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97736 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97736 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97736' 00:16:06.819 killing process with pid 97736 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97736 00:16:06.819 [2024-11-18 23:11:25.986308] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.819 [2024-11-18 23:11:25.986381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:06.819 [2024-11-18 23:11:25.986420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.819 [2024-11-18 23:11:25.986428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:06.819 23:11:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97736 00:16:06.819 [2024-11-18 23:11:26.009824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.079 23:11:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:07.079 00:16:07.079 real 0m4.993s 00:16:07.079 user 0m8.108s 00:16:07.079 sys 0m1.123s 00:16:07.079 23:11:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.079 23:11:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.079 ************************************ 00:16:07.079 END TEST raid_superblock_test_md_separate 00:16:07.079 ************************************ 00:16:07.079 23:11:26 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:07.079 23:11:26 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:07.079 23:11:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:07.079 23:11:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.079 23:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.079 ************************************ 00:16:07.079 START TEST raid_rebuild_test_sb_md_separate 00:16:07.079 ************************************ 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:07.079 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98048 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98048 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98048 ']' 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.080 23:11:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.080 [2024-11-18 23:11:26.441330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:07.080 [2024-11-18 23:11:26.441515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:07.080 Zero copy mechanism will not be used. 00:16:07.080 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98048 ] 00:16:07.339 [2024-11-18 23:11:26.602507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.339 [2024-11-18 23:11:26.648416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.339 [2024-11-18 23:11:26.690428] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.339 [2024-11-18 23:11:26.690545] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 BaseBdev1_malloc 
00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 [2024-11-18 23:11:27.273164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:07.909 [2024-11-18 23:11:27.273223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.909 [2024-11-18 23:11:27.273256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.909 [2024-11-18 23:11:27.273265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.909 [2024-11-18 23:11:27.275127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.909 [2024-11-18 23:11:27.275164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:07.909 BaseBdev1 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.909 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.169 BaseBdev2_malloc 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.169 [2024-11-18 23:11:27.315823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:08.169 [2024-11-18 23:11:27.315922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.169 [2024-11-18 23:11:27.315965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:08.169 [2024-11-18 23:11:27.315985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.169 [2024-11-18 23:11:27.320103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.169 [2024-11-18 23:11:27.320266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:08.169 BaseBdev2 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.169 spare_malloc 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.169 spare_delay 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.169 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.169 [2024-11-18 23:11:27.359013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:08.169 [2024-11-18 23:11:27.359128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.169 [2024-11-18 23:11:27.359156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:08.170 [2024-11-18 23:11:27.359167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.170 [2024-11-18 23:11:27.361025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.170 [2024-11-18 23:11:27.361060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:08.170 spare 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.170 [2024-11-18 23:11:27.371013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.170 [2024-11-18 23:11:27.372754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.170 [2024-11-18 23:11:27.372906] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:08.170 [2024-11-18 23:11:27.372917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:08.170 [2024-11-18 23:11:27.372989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:08.170 [2024-11-18 23:11:27.373073] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:08.170 [2024-11-18 23:11:27.373082] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:08.170 [2024-11-18 23:11:27.373150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.170 23:11:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.170 "name": "raid_bdev1", 00:16:08.170 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:08.170 "strip_size_kb": 0, 00:16:08.170 "state": "online", 00:16:08.170 "raid_level": "raid1", 00:16:08.170 "superblock": true, 00:16:08.170 "num_base_bdevs": 2, 00:16:08.170 "num_base_bdevs_discovered": 2, 00:16:08.170 "num_base_bdevs_operational": 2, 00:16:08.170 "base_bdevs_list": [ 00:16:08.170 { 00:16:08.170 "name": "BaseBdev1", 00:16:08.170 "uuid": "41d238a8-2094-56c5-b121-c83b6574916c", 00:16:08.170 "is_configured": true, 00:16:08.170 "data_offset": 256, 00:16:08.170 "data_size": 7936 00:16:08.170 }, 00:16:08.170 { 00:16:08.170 "name": "BaseBdev2", 00:16:08.170 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:08.170 "is_configured": true, 00:16:08.170 "data_offset": 256, 00:16:08.170 "data_size": 7936 
00:16:08.170 } 00:16:08.170 ] 00:16:08.170 }' 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.170 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.429 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.429 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.429 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.429 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:08.429 [2024-11-18 23:11:27.802570] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.688 23:11:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:08.949 [2024-11-18 23:11:28.077844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:08.949 /dev/nbd0 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.949 1+0 records in 00:16:08.949 1+0 records out 00:16:08.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053256 s, 7.7 MB/s 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.949 23:11:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:08.949 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:09.518 7936+0 records in 00:16:09.518 7936+0 records out 00:16:09.518 32505856 bytes (33 MB, 31 MiB) copied, 0.616468 s, 52.7 MB/s 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.518 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:09.777 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:09.777 [2024-11-18 23:11:28.997252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.777 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:09.778 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:09.778 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.778 23:11:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.778 23:11:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.778 [2024-11-18 23:11:29.013810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.778 "name": "raid_bdev1", 00:16:09.778 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:09.778 "strip_size_kb": 0, 00:16:09.778 "state": "online", 00:16:09.778 "raid_level": "raid1", 00:16:09.778 "superblock": true, 00:16:09.778 "num_base_bdevs": 2, 00:16:09.778 "num_base_bdevs_discovered": 1, 00:16:09.778 "num_base_bdevs_operational": 1, 00:16:09.778 "base_bdevs_list": [ 00:16:09.778 { 00:16:09.778 "name": null, 00:16:09.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.778 "is_configured": false, 00:16:09.778 "data_offset": 0, 00:16:09.778 "data_size": 7936 00:16:09.778 }, 00:16:09.778 { 00:16:09.778 "name": "BaseBdev2", 00:16:09.778 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:09.778 "is_configured": true, 00:16:09.778 "data_offset": 256, 00:16:09.778 "data_size": 7936 00:16:09.778 } 00:16:09.778 ] 00:16:09.778 }' 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.778 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.344 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.344 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.344 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.344 [2024-11-18 23:11:29.492963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.344 [2024-11-18 23:11:29.494807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:10.344 [2024-11-18 23:11:29.496682] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.344 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.344 23:11:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.280 "name": "raid_bdev1", 00:16:11.280 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:11.280 "strip_size_kb": 0, 00:16:11.280 "state": "online", 00:16:11.280 "raid_level": "raid1", 00:16:11.280 "superblock": true, 00:16:11.280 "num_base_bdevs": 2, 00:16:11.280 "num_base_bdevs_discovered": 2, 00:16:11.280 "num_base_bdevs_operational": 2, 00:16:11.280 "process": { 00:16:11.280 "type": "rebuild", 00:16:11.280 "target": "spare", 00:16:11.280 "progress": { 00:16:11.280 "blocks": 2560, 00:16:11.280 "percent": 32 00:16:11.280 } 00:16:11.280 }, 00:16:11.280 "base_bdevs_list": [ 00:16:11.280 { 00:16:11.280 "name": "spare", 00:16:11.280 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:11.280 "is_configured": true, 00:16:11.280 "data_offset": 256, 00:16:11.280 "data_size": 7936 00:16:11.280 }, 00:16:11.280 { 00:16:11.280 "name": "BaseBdev2", 00:16:11.280 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:11.280 "is_configured": true, 00:16:11.280 "data_offset": 256, 00:16:11.280 "data_size": 7936 00:16:11.280 } 00:16:11.280 ] 00:16:11.280 }' 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.280 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.539 23:11:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 [2024-11-18 23:11:30.660339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.539 [2024-11-18 23:11:30.701176] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:11.539 [2024-11-18 23:11:30.701228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.539 [2024-11-18 23:11:30.701245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.539 [2024-11-18 23:11:30.701252] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.539 23:11:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.539 "name": "raid_bdev1", 00:16:11.539 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:11.539 "strip_size_kb": 0, 00:16:11.539 "state": "online", 00:16:11.539 "raid_level": "raid1", 00:16:11.539 "superblock": true, 00:16:11.539 "num_base_bdevs": 2, 00:16:11.539 "num_base_bdevs_discovered": 1, 00:16:11.539 "num_base_bdevs_operational": 1, 00:16:11.539 "base_bdevs_list": [ 00:16:11.539 { 00:16:11.539 "name": null, 00:16:11.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.539 "is_configured": false, 00:16:11.539 "data_offset": 0, 00:16:11.539 "data_size": 7936 00:16:11.539 }, 00:16:11.539 { 00:16:11.539 "name": "BaseBdev2", 00:16:11.539 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:11.539 "is_configured": true, 00:16:11.539 "data_offset": 256, 00:16:11.539 "data_size": 7936 00:16:11.539 } 00:16:11.539 ] 00:16:11.539 }' 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.539 23:11:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.799 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.799 "name": "raid_bdev1", 00:16:11.799 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:11.799 "strip_size_kb": 0, 00:16:11.799 "state": "online", 00:16:11.799 "raid_level": "raid1", 00:16:11.799 "superblock": true, 00:16:11.799 "num_base_bdevs": 2, 00:16:11.800 "num_base_bdevs_discovered": 1, 00:16:11.800 "num_base_bdevs_operational": 1, 00:16:11.800 "base_bdevs_list": [ 00:16:11.800 { 00:16:11.800 "name": null, 00:16:11.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.800 
"is_configured": false, 00:16:11.800 "data_offset": 0, 00:16:11.800 "data_size": 7936 00:16:11.800 }, 00:16:11.800 { 00:16:11.800 "name": "BaseBdev2", 00:16:11.800 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:11.800 "is_configured": true, 00:16:11.800 "data_offset": 256, 00:16:11.800 "data_size": 7936 00:16:11.800 } 00:16:11.800 ] 00:16:11.800 }' 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.060 [2024-11-18 23:11:31.279361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.060 [2024-11-18 23:11:31.280835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:12.060 [2024-11-18 23:11:31.282643] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.060 23:11:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.998 23:11:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.998 "name": "raid_bdev1", 00:16:12.998 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:12.998 "strip_size_kb": 0, 00:16:12.998 "state": "online", 00:16:12.998 "raid_level": "raid1", 00:16:12.998 "superblock": true, 00:16:12.998 "num_base_bdevs": 2, 00:16:12.998 "num_base_bdevs_discovered": 2, 00:16:12.998 "num_base_bdevs_operational": 2, 00:16:12.998 "process": { 00:16:12.998 "type": "rebuild", 00:16:12.998 "target": "spare", 00:16:12.998 "progress": { 00:16:12.998 "blocks": 2560, 00:16:12.998 "percent": 32 00:16:12.998 } 00:16:12.998 }, 00:16:12.998 "base_bdevs_list": [ 00:16:12.998 { 00:16:12.998 "name": "spare", 00:16:12.998 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:12.998 "is_configured": true, 00:16:12.998 "data_offset": 256, 00:16:12.998 "data_size": 7936 00:16:12.998 }, 
00:16:12.998 { 00:16:12.998 "name": "BaseBdev2", 00:16:12.998 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:12.998 "is_configured": true, 00:16:12.998 "data_offset": 256, 00:16:12.998 "data_size": 7936 00:16:12.998 } 00:16:12.998 ] 00:16:12.998 }' 00:16:12.998 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:13.258 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=588 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.258 23:11:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.258 "name": "raid_bdev1", 00:16:13.258 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:13.258 "strip_size_kb": 0, 00:16:13.258 "state": "online", 00:16:13.258 "raid_level": "raid1", 00:16:13.258 "superblock": true, 00:16:13.258 "num_base_bdevs": 2, 00:16:13.258 "num_base_bdevs_discovered": 2, 00:16:13.258 "num_base_bdevs_operational": 2, 00:16:13.258 "process": { 00:16:13.258 "type": "rebuild", 00:16:13.258 "target": "spare", 00:16:13.258 "progress": { 00:16:13.258 "blocks": 2816, 00:16:13.258 "percent": 35 00:16:13.258 } 00:16:13.258 }, 00:16:13.258 "base_bdevs_list": [ 00:16:13.258 { 00:16:13.258 "name": "spare", 00:16:13.258 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:13.258 "is_configured": true, 00:16:13.258 "data_offset": 256, 00:16:13.258 "data_size": 7936 00:16:13.258 }, 00:16:13.258 { 00:16:13.258 "name": "BaseBdev2", 00:16:13.258 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:13.258 
"is_configured": true, 00:16:13.258 "data_offset": 256, 00:16:13.258 "data_size": 7936 00:16:13.258 } 00:16:13.258 ] 00:16:13.258 }' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.258 23:11:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.639 23:11:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.639 "name": "raid_bdev1", 00:16:14.639 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:14.639 "strip_size_kb": 0, 00:16:14.639 "state": "online", 00:16:14.639 "raid_level": "raid1", 00:16:14.639 "superblock": true, 00:16:14.639 "num_base_bdevs": 2, 00:16:14.639 "num_base_bdevs_discovered": 2, 00:16:14.639 "num_base_bdevs_operational": 2, 00:16:14.639 "process": { 00:16:14.639 "type": "rebuild", 00:16:14.639 "target": "spare", 00:16:14.639 "progress": { 00:16:14.639 "blocks": 5888, 00:16:14.639 "percent": 74 00:16:14.639 } 00:16:14.639 }, 00:16:14.639 "base_bdevs_list": [ 00:16:14.639 { 00:16:14.639 "name": "spare", 00:16:14.639 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:14.639 "is_configured": true, 00:16:14.639 "data_offset": 256, 00:16:14.639 "data_size": 7936 00:16:14.639 }, 00:16:14.639 { 00:16:14.639 "name": "BaseBdev2", 00:16:14.639 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:14.639 "is_configured": true, 00:16:14.639 "data_offset": 256, 00:16:14.639 "data_size": 7936 00:16:14.639 } 00:16:14.639 ] 00:16:14.639 }' 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.639 23:11:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.208 [2024-11-18 23:11:34.392687] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:15.208 [2024-11-18 23:11:34.392757] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:15.208 [2024-11-18 23:11:34.392844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.468 "name": "raid_bdev1", 00:16:15.468 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:15.468 "strip_size_kb": 0, 00:16:15.468 "state": "online", 00:16:15.468 "raid_level": "raid1", 00:16:15.468 "superblock": true, 00:16:15.468 
"num_base_bdevs": 2, 00:16:15.468 "num_base_bdevs_discovered": 2, 00:16:15.468 "num_base_bdevs_operational": 2, 00:16:15.468 "base_bdevs_list": [ 00:16:15.468 { 00:16:15.468 "name": "spare", 00:16:15.468 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:15.468 "is_configured": true, 00:16:15.468 "data_offset": 256, 00:16:15.468 "data_size": 7936 00:16:15.468 }, 00:16:15.468 { 00:16:15.468 "name": "BaseBdev2", 00:16:15.468 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:15.468 "is_configured": true, 00:16:15.468 "data_offset": 256, 00:16:15.468 "data_size": 7936 00:16:15.468 } 00:16:15.468 ] 00:16:15.468 }' 00:16:15.468 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.728 23:11:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.728 "name": "raid_bdev1", 00:16:15.728 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:15.728 "strip_size_kb": 0, 00:16:15.728 "state": "online", 00:16:15.728 "raid_level": "raid1", 00:16:15.728 "superblock": true, 00:16:15.728 "num_base_bdevs": 2, 00:16:15.728 "num_base_bdevs_discovered": 2, 00:16:15.728 "num_base_bdevs_operational": 2, 00:16:15.728 "base_bdevs_list": [ 00:16:15.728 { 00:16:15.728 "name": "spare", 00:16:15.728 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:15.728 "is_configured": true, 00:16:15.728 "data_offset": 256, 00:16:15.728 "data_size": 7936 00:16:15.728 }, 00:16:15.728 { 00:16:15.728 "name": "BaseBdev2", 00:16:15.728 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:15.728 "is_configured": true, 00:16:15.728 "data_offset": 256, 00:16:15.728 "data_size": 7936 00:16:15.728 } 00:16:15.728 ] 00:16:15.728 }' 00:16:15.728 23:11:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.728 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.729 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.988 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.988 "name": "raid_bdev1", 00:16:15.988 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:15.988 
"strip_size_kb": 0, 00:16:15.988 "state": "online", 00:16:15.988 "raid_level": "raid1", 00:16:15.988 "superblock": true, 00:16:15.988 "num_base_bdevs": 2, 00:16:15.988 "num_base_bdevs_discovered": 2, 00:16:15.988 "num_base_bdevs_operational": 2, 00:16:15.988 "base_bdevs_list": [ 00:16:15.988 { 00:16:15.988 "name": "spare", 00:16:15.988 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:15.988 "is_configured": true, 00:16:15.988 "data_offset": 256, 00:16:15.988 "data_size": 7936 00:16:15.988 }, 00:16:15.988 { 00:16:15.988 "name": "BaseBdev2", 00:16:15.988 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:15.988 "is_configured": true, 00:16:15.988 "data_offset": 256, 00:16:15.988 "data_size": 7936 00:16:15.988 } 00:16:15.988 ] 00:16:15.988 }' 00:16:15.988 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.988 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.247 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.247 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.247 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.247 [2024-11-18 23:11:35.498942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.247 [2024-11-18 23:11:35.498971] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.247 [2024-11-18 23:11:35.499041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.247 [2024-11-18 23:11:35.499098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.247 [2024-11-18 23:11:35.499111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:16:16.247 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.247 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.247 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.248 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:16.508 /dev/nbd0 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.508 1+0 records in 00:16:16.508 1+0 records out 00:16:16.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329927 s, 12.4 MB/s 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.508 23:11:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:16.789 /dev/nbd1 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:16.789 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.789 1+0 records in 00:16:16.790 1+0 records out 00:16:16.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136631 s, 3.0 MB/s 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.790 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.052 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.312 [2024-11-18 23:11:36.619525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:17.312 [2024-11-18 23:11:36.619580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.312 [2024-11-18 23:11:36.619598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:17.312 [2024-11-18 23:11:36.619610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:17.312 [2024-11-18 23:11:36.621510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.312 [2024-11-18 23:11:36.621549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:17.312 [2024-11-18 23:11:36.621597] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:17.312 [2024-11-18 23:11:36.621632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.312 [2024-11-18 23:11:36.621732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.312 spare 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.312 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.574 [2024-11-18 23:11:36.721627] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:17.574 [2024-11-18 23:11:36.721652] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:17.574 [2024-11-18 23:11:36.721748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:17.574 [2024-11-18 23:11:36.721845] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:17.574 [2024-11-18 23:11:36.721856] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:17.574 [2024-11-18 23:11:36.721945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.574 "name": "raid_bdev1", 00:16:17.574 "uuid": 
"707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:17.574 "strip_size_kb": 0, 00:16:17.574 "state": "online", 00:16:17.574 "raid_level": "raid1", 00:16:17.574 "superblock": true, 00:16:17.574 "num_base_bdevs": 2, 00:16:17.574 "num_base_bdevs_discovered": 2, 00:16:17.574 "num_base_bdevs_operational": 2, 00:16:17.574 "base_bdevs_list": [ 00:16:17.574 { 00:16:17.574 "name": "spare", 00:16:17.574 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:17.574 "is_configured": true, 00:16:17.574 "data_offset": 256, 00:16:17.574 "data_size": 7936 00:16:17.574 }, 00:16:17.574 { 00:16:17.574 "name": "BaseBdev2", 00:16:17.574 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:17.574 "is_configured": true, 00:16:17.574 "data_offset": 256, 00:16:17.574 "data_size": 7936 00:16:17.574 } 00:16:17.574 ] 00:16:17.574 }' 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.574 23:11:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.092 "name": "raid_bdev1", 00:16:18.092 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:18.092 "strip_size_kb": 0, 00:16:18.092 "state": "online", 00:16:18.092 "raid_level": "raid1", 00:16:18.092 "superblock": true, 00:16:18.092 "num_base_bdevs": 2, 00:16:18.092 "num_base_bdevs_discovered": 2, 00:16:18.092 "num_base_bdevs_operational": 2, 00:16:18.092 "base_bdevs_list": [ 00:16:18.092 { 00:16:18.092 "name": "spare", 00:16:18.092 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:18.092 "is_configured": true, 00:16:18.092 "data_offset": 256, 00:16:18.092 "data_size": 7936 00:16:18.092 }, 00:16:18.092 { 00:16:18.092 "name": "BaseBdev2", 00:16:18.092 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:18.092 "is_configured": true, 00:16:18.092 "data_offset": 256, 00:16:18.092 "data_size": 7936 00:16:18.092 } 00:16:18.092 ] 00:16:18.092 }' 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.092 [2024-11-18 23:11:37.354371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.092 23:11:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.092 "name": "raid_bdev1", 00:16:18.092 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:18.092 "strip_size_kb": 0, 00:16:18.092 "state": "online", 00:16:18.092 "raid_level": "raid1", 00:16:18.092 "superblock": true, 00:16:18.092 "num_base_bdevs": 2, 00:16:18.092 "num_base_bdevs_discovered": 1, 00:16:18.092 "num_base_bdevs_operational": 1, 00:16:18.092 "base_bdevs_list": [ 00:16:18.092 { 00:16:18.092 "name": null, 00:16:18.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.092 "is_configured": false, 00:16:18.092 "data_offset": 0, 00:16:18.092 "data_size": 7936 00:16:18.092 }, 00:16:18.092 { 00:16:18.092 "name": "BaseBdev2", 00:16:18.092 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:18.092 "is_configured": true, 00:16:18.092 "data_offset": 256, 00:16:18.092 "data_size": 7936 00:16:18.092 } 00:16:18.092 ] 00:16:18.092 }' 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.092 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.660 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.660 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 [2024-11-18 23:11:37.781651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.660 [2024-11-18 23:11:37.781780] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:18.660 [2024-11-18 23:11:37.781802] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:18.660 [2024-11-18 23:11:37.781835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.660 [2024-11-18 23:11:37.783394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:18.660 [2024-11-18 23:11:37.785162] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.660 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.660 23:11:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.599 23:11:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.599 "name": "raid_bdev1", 00:16:19.599 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:19.599 "strip_size_kb": 0, 00:16:19.599 "state": "online", 00:16:19.599 "raid_level": "raid1", 00:16:19.599 "superblock": true, 00:16:19.599 "num_base_bdevs": 2, 00:16:19.599 "num_base_bdevs_discovered": 2, 00:16:19.599 "num_base_bdevs_operational": 2, 00:16:19.599 "process": { 00:16:19.599 "type": "rebuild", 00:16:19.599 "target": "spare", 00:16:19.599 "progress": { 00:16:19.599 "blocks": 2560, 00:16:19.599 "percent": 32 00:16:19.599 } 00:16:19.599 }, 00:16:19.599 "base_bdevs_list": [ 00:16:19.599 { 00:16:19.599 "name": "spare", 00:16:19.599 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:19.599 "is_configured": true, 00:16:19.599 "data_offset": 256, 00:16:19.599 "data_size": 7936 00:16:19.599 }, 00:16:19.599 { 00:16:19.599 "name": "BaseBdev2", 00:16:19.599 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:19.599 "is_configured": true, 00:16:19.599 "data_offset": 256, 00:16:19.599 "data_size": 7936 00:16:19.599 } 00:16:19.599 ] 00:16:19.599 
}' 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.599 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.600 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.600 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.600 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.600 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.600 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.600 [2024-11-18 23:11:38.952320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.860 [2024-11-18 23:11:38.989065] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:19.860 [2024-11-18 23:11:38.989115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.860 [2024-11-18 23:11:38.989130] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.860 [2024-11-18 23:11:38.989137] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.860 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.861 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.861 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.861 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.861 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.861 23:11:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.861 "name": "raid_bdev1", 00:16:19.861 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:19.861 "strip_size_kb": 0, 00:16:19.861 "state": "online", 00:16:19.861 "raid_level": "raid1", 00:16:19.861 "superblock": true, 00:16:19.861 "num_base_bdevs": 2, 00:16:19.861 "num_base_bdevs_discovered": 1, 00:16:19.861 "num_base_bdevs_operational": 1, 00:16:19.861 "base_bdevs_list": [ 00:16:19.861 { 00:16:19.861 "name": 
null, 00:16:19.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.861 "is_configured": false, 00:16:19.861 "data_offset": 0, 00:16:19.861 "data_size": 7936 00:16:19.861 }, 00:16:19.861 { 00:16:19.861 "name": "BaseBdev2", 00:16:19.861 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:19.861 "is_configured": true, 00:16:19.861 "data_offset": 256, 00:16:19.861 "data_size": 7936 00:16:19.861 } 00:16:19.861 ] 00:16:19.861 }' 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.861 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.121 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:20.121 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.121 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.121 [2024-11-18 23:11:39.448212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:20.122 [2024-11-18 23:11:39.448267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.122 [2024-11-18 23:11:39.448299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:20.122 [2024-11-18 23:11:39.448310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.122 [2024-11-18 23:11:39.448513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.122 [2024-11-18 23:11:39.448540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:20.122 [2024-11-18 23:11:39.448597] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:20.122 [2024-11-18 23:11:39.448609] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:20.122 [2024-11-18 23:11:39.448624] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:20.122 [2024-11-18 23:11:39.448667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.122 [2024-11-18 23:11:39.450029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:20.122 [2024-11-18 23:11:39.451831] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.122 spare 00:16:20.122 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.122 23:11:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:21.501 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.501 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.501 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.501 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.502 23:11:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.502 "name": "raid_bdev1", 00:16:21.502 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:21.502 "strip_size_kb": 0, 00:16:21.502 "state": "online", 00:16:21.502 "raid_level": "raid1", 00:16:21.502 "superblock": true, 00:16:21.502 "num_base_bdevs": 2, 00:16:21.502 "num_base_bdevs_discovered": 2, 00:16:21.502 "num_base_bdevs_operational": 2, 00:16:21.502 "process": { 00:16:21.502 "type": "rebuild", 00:16:21.502 "target": "spare", 00:16:21.502 "progress": { 00:16:21.502 "blocks": 2560, 00:16:21.502 "percent": 32 00:16:21.502 } 00:16:21.502 }, 00:16:21.502 "base_bdevs_list": [ 00:16:21.502 { 00:16:21.502 "name": "spare", 00:16:21.502 "uuid": "91605b71-8e8d-5611-aa18-55a0a62e0c52", 00:16:21.502 "is_configured": true, 00:16:21.502 "data_offset": 256, 00:16:21.502 "data_size": 7936 00:16:21.502 }, 00:16:21.502 { 00:16:21.502 "name": "BaseBdev2", 00:16:21.502 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:21.502 "is_configured": true, 00:16:21.502 "data_offset": 256, 00:16:21.502 "data_size": 7936 00:16:21.502 } 00:16:21.502 ] 00:16:21.502 }' 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.502 [2024-11-18 23:11:40.590871] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.502 [2024-11-18 23:11:40.655644] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:21.502 [2024-11-18 23:11:40.655697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.502 [2024-11-18 23:11:40.655710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.502 [2024-11-18 23:11:40.655718] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.502 "name": "raid_bdev1", 00:16:21.502 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:21.502 "strip_size_kb": 0, 00:16:21.502 "state": "online", 00:16:21.502 "raid_level": "raid1", 00:16:21.502 "superblock": true, 00:16:21.502 "num_base_bdevs": 2, 00:16:21.502 "num_base_bdevs_discovered": 1, 00:16:21.502 "num_base_bdevs_operational": 1, 00:16:21.502 "base_bdevs_list": [ 00:16:21.502 { 00:16:21.502 "name": null, 00:16:21.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.502 "is_configured": false, 00:16:21.502 "data_offset": 0, 00:16:21.502 "data_size": 7936 00:16:21.502 }, 00:16:21.502 { 00:16:21.502 "name": "BaseBdev2", 00:16:21.502 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:21.502 "is_configured": true, 00:16:21.502 "data_offset": 256, 00:16:21.502 "data_size": 7936 00:16:21.502 } 00:16:21.502 ] 00:16:21.502 }' 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.502 23:11:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.071 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.071 "name": "raid_bdev1", 00:16:22.071 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:22.071 "strip_size_kb": 0, 00:16:22.072 "state": "online", 00:16:22.072 "raid_level": "raid1", 00:16:22.072 "superblock": true, 00:16:22.072 "num_base_bdevs": 2, 00:16:22.072 "num_base_bdevs_discovered": 1, 00:16:22.072 "num_base_bdevs_operational": 1, 00:16:22.072 "base_bdevs_list": [ 00:16:22.072 { 00:16:22.072 "name": null, 00:16:22.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.072 "is_configured": false, 00:16:22.072 "data_offset": 0, 00:16:22.072 "data_size": 7936 00:16:22.072 }, 00:16:22.072 { 00:16:22.072 "name": "BaseBdev2", 00:16:22.072 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 
00:16:22.072 "is_configured": true, 00:16:22.072 "data_offset": 256, 00:16:22.072 "data_size": 7936 00:16:22.072 } 00:16:22.072 ] 00:16:22.072 }' 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.072 [2024-11-18 23:11:41.322253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:22.072 [2024-11-18 23:11:41.322316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.072 [2024-11-18 23:11:41.322335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:22.072 [2024-11-18 23:11:41.322345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:22.072 [2024-11-18 23:11:41.322511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.072 [2024-11-18 23:11:41.322532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.072 [2024-11-18 23:11:41.322579] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:22.072 [2024-11-18 23:11:41.322602] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.072 [2024-11-18 23:11:41.322609] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:22.072 [2024-11-18 23:11:41.322620] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:22.072 BaseBdev1 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.072 23:11:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.009 23:11:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.009 "name": "raid_bdev1", 00:16:23.009 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:23.009 "strip_size_kb": 0, 00:16:23.009 "state": "online", 00:16:23.009 "raid_level": "raid1", 00:16:23.009 "superblock": true, 00:16:23.009 "num_base_bdevs": 2, 00:16:23.009 "num_base_bdevs_discovered": 1, 00:16:23.009 "num_base_bdevs_operational": 1, 00:16:23.009 "base_bdevs_list": [ 00:16:23.009 { 00:16:23.009 "name": null, 00:16:23.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.009 "is_configured": false, 00:16:23.009 "data_offset": 0, 00:16:23.009 "data_size": 7936 00:16:23.009 }, 00:16:23.009 { 00:16:23.009 "name": "BaseBdev2", 00:16:23.009 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:23.009 "is_configured": true, 00:16:23.009 "data_offset": 256, 00:16:23.009 "data_size": 7936 00:16:23.009 } 00:16:23.009 ] 00:16:23.009 }' 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.009 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.578 "name": "raid_bdev1", 00:16:23.578 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:23.578 "strip_size_kb": 0, 00:16:23.578 "state": "online", 00:16:23.578 "raid_level": "raid1", 00:16:23.578 "superblock": true, 00:16:23.578 "num_base_bdevs": 2, 00:16:23.578 "num_base_bdevs_discovered": 1, 00:16:23.578 "num_base_bdevs_operational": 1, 00:16:23.578 "base_bdevs_list": [ 00:16:23.578 { 00:16:23.578 "name": null, 00:16:23.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.578 
"is_configured": false, 00:16:23.578 "data_offset": 0, 00:16:23.578 "data_size": 7936 00:16:23.578 }, 00:16:23.578 { 00:16:23.578 "name": "BaseBdev2", 00:16:23.578 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:23.578 "is_configured": true, 00:16:23.578 "data_offset": 256, 00:16:23.578 "data_size": 7936 00:16:23.578 } 00:16:23.578 ] 00:16:23.578 }' 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:23.578 23:11:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.578 [2024-11-18 23:11:42.899548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.578 [2024-11-18 23:11:42.899666] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:23.578 [2024-11-18 23:11:42.899678] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:23.578 request: 00:16:23.578 { 00:16:23.578 "base_bdev": "BaseBdev1", 00:16:23.578 "raid_bdev": "raid_bdev1", 00:16:23.578 "method": "bdev_raid_add_base_bdev", 00:16:23.578 "req_id": 1 00:16:23.578 } 00:16:23.578 Got JSON-RPC error response 00:16:23.578 response: 00:16:23.578 { 00:16:23.578 "code": -22, 00:16:23.578 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:23.578 } 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.578 23:11:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.540 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.800 "name": "raid_bdev1", 00:16:24.800 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:24.800 "strip_size_kb": 0, 00:16:24.800 "state": "online", 00:16:24.800 "raid_level": "raid1", 00:16:24.800 "superblock": true, 00:16:24.800 "num_base_bdevs": 2, 00:16:24.800 
"num_base_bdevs_discovered": 1, 00:16:24.800 "num_base_bdevs_operational": 1, 00:16:24.800 "base_bdevs_list": [ 00:16:24.800 { 00:16:24.800 "name": null, 00:16:24.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.800 "is_configured": false, 00:16:24.800 "data_offset": 0, 00:16:24.800 "data_size": 7936 00:16:24.800 }, 00:16:24.800 { 00:16:24.800 "name": "BaseBdev2", 00:16:24.800 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:24.800 "is_configured": true, 00:16:24.800 "data_offset": 256, 00:16:24.800 "data_size": 7936 00:16:24.800 } 00:16:24.800 ] 00:16:24.800 }' 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.800 23:11:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.060 "name": "raid_bdev1", 00:16:25.060 "uuid": "707b14ab-06ac-4c3b-9b5d-240b3097d319", 00:16:25.060 "strip_size_kb": 0, 00:16:25.060 "state": "online", 00:16:25.060 "raid_level": "raid1", 00:16:25.060 "superblock": true, 00:16:25.060 "num_base_bdevs": 2, 00:16:25.060 "num_base_bdevs_discovered": 1, 00:16:25.060 "num_base_bdevs_operational": 1, 00:16:25.060 "base_bdevs_list": [ 00:16:25.060 { 00:16:25.060 "name": null, 00:16:25.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.060 "is_configured": false, 00:16:25.060 "data_offset": 0, 00:16:25.060 "data_size": 7936 00:16:25.060 }, 00:16:25.060 { 00:16:25.060 "name": "BaseBdev2", 00:16:25.060 "uuid": "0f54e585-9f01-5369-9522-79c0e099d2b3", 00:16:25.060 "is_configured": true, 00:16:25.060 "data_offset": 256, 00:16:25.060 "data_size": 7936 00:16:25.060 } 00:16:25.060 ] 00:16:25.060 }' 00:16:25.060 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98048 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98048 ']' 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98048 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:25.320 23:11:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98048 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.320 killing process with pid 98048 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98048' 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98048 00:16:25.320 Received shutdown signal, test time was about 60.000000 seconds 00:16:25.320 00:16:25.320 Latency(us) 00:16:25.320 [2024-11-18T23:11:44.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.320 [2024-11-18T23:11:44.698Z] =================================================================================================================== 00:16:25.320 [2024-11-18T23:11:44.698Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:25.320 [2024-11-18 23:11:44.543530] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.320 [2024-11-18 23:11:44.543651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.320 [2024-11-18 23:11:44.543701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.320 [2024-11-18 23:11:44.543713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:25.320 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98048 00:16:25.320 [2024-11-18 23:11:44.575971] bdev_raid.c:1409:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:16:25.580 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:25.580 00:16:25.580 real 0m18.463s 00:16:25.580 user 0m24.481s 00:16:25.580 sys 0m2.759s 00:16:25.580 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.580 23:11:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.580 ************************************ 00:16:25.580 END TEST raid_rebuild_test_sb_md_separate 00:16:25.580 ************************************ 00:16:25.580 23:11:44 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:25.580 23:11:44 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:25.580 23:11:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:25.580 23:11:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.580 23:11:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.580 ************************************ 00:16:25.580 START TEST raid_state_function_test_sb_md_interleaved 00:16:25.580 ************************************ 00:16:25.580 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:25.581 23:11:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98729 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98729' 00:16:25.581 Process raid pid: 98729 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98729 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98729 ']' 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.581 23:11:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.841 [2024-11-18 23:11:44.978150] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:25.841 [2024-11-18 23:11:44.978301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.841 [2024-11-18 23:11:45.137864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.841 [2024-11-18 23:11:45.183267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.100 [2024-11-18 23:11:45.225306] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.101 [2024-11-18 23:11:45.225340] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.670 [2024-11-18 23:11:45.802600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.670 [2024-11-18 23:11:45.802652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.670 [2024-11-18 23:11:45.802670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.670 [2024-11-18 23:11:45.802680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.670 23:11:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.670 23:11:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.670 "name": "Existed_Raid", 00:16:26.670 "uuid": "ded3a223-c095-43d2-95ac-d9f9fc0ac7e9", 00:16:26.670 "strip_size_kb": 0, 00:16:26.670 "state": "configuring", 00:16:26.670 "raid_level": "raid1", 00:16:26.670 "superblock": true, 00:16:26.670 "num_base_bdevs": 2, 00:16:26.670 "num_base_bdevs_discovered": 0, 00:16:26.670 "num_base_bdevs_operational": 2, 00:16:26.670 "base_bdevs_list": [ 00:16:26.670 { 00:16:26.670 "name": "BaseBdev1", 00:16:26.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.670 "is_configured": false, 00:16:26.670 "data_offset": 0, 00:16:26.670 "data_size": 0 00:16:26.670 }, 00:16:26.670 { 00:16:26.670 "name": "BaseBdev2", 00:16:26.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.670 "is_configured": false, 00:16:26.670 "data_offset": 0, 00:16:26.670 "data_size": 0 00:16:26.670 } 00:16:26.670 ] 00:16:26.670 }' 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.670 23:11:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.930 [2024-11-18 23:11:46.281660] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.930 [2024-11-18 23:11:46.281711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.930 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.930 [2024-11-18 23:11:46.293680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.931 [2024-11-18 23:11:46.293718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.931 [2024-11-18 23:11:46.293726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.931 [2024-11-18 23:11:46.293735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.931 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.931 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:26.931 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.931 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 [2024-11-18 23:11:46.314614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.191 BaseBdev1 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 [ 00:16:27.191 { 00:16:27.191 "name": "BaseBdev1", 00:16:27.191 "aliases": [ 00:16:27.191 "dd65da00-4f1d-4e56-9b77-18b962514988" 00:16:27.191 ], 00:16:27.191 "product_name": "Malloc disk", 00:16:27.191 "block_size": 4128, 00:16:27.191 "num_blocks": 8192, 00:16:27.191 "uuid": "dd65da00-4f1d-4e56-9b77-18b962514988", 00:16:27.191 "md_size": 32, 00:16:27.191 
"md_interleave": true, 00:16:27.191 "dif_type": 0, 00:16:27.191 "assigned_rate_limits": { 00:16:27.191 "rw_ios_per_sec": 0, 00:16:27.191 "rw_mbytes_per_sec": 0, 00:16:27.191 "r_mbytes_per_sec": 0, 00:16:27.191 "w_mbytes_per_sec": 0 00:16:27.191 }, 00:16:27.191 "claimed": true, 00:16:27.191 "claim_type": "exclusive_write", 00:16:27.191 "zoned": false, 00:16:27.191 "supported_io_types": { 00:16:27.191 "read": true, 00:16:27.191 "write": true, 00:16:27.191 "unmap": true, 00:16:27.191 "flush": true, 00:16:27.191 "reset": true, 00:16:27.191 "nvme_admin": false, 00:16:27.191 "nvme_io": false, 00:16:27.191 "nvme_io_md": false, 00:16:27.191 "write_zeroes": true, 00:16:27.191 "zcopy": true, 00:16:27.191 "get_zone_info": false, 00:16:27.191 "zone_management": false, 00:16:27.191 "zone_append": false, 00:16:27.191 "compare": false, 00:16:27.191 "compare_and_write": false, 00:16:27.191 "abort": true, 00:16:27.191 "seek_hole": false, 00:16:27.191 "seek_data": false, 00:16:27.191 "copy": true, 00:16:27.191 "nvme_iov_md": false 00:16:27.191 }, 00:16:27.191 "memory_domains": [ 00:16:27.191 { 00:16:27.191 "dma_device_id": "system", 00:16:27.191 "dma_device_type": 1 00:16:27.191 }, 00:16:27.191 { 00:16:27.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.191 "dma_device_type": 2 00:16:27.191 } 00:16:27.191 ], 00:16:27.191 "driver_specific": {} 00:16:27.191 } 00:16:27.191 ] 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.191 23:11:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.191 "name": "Existed_Raid", 00:16:27.191 "uuid": "adedf0fd-7909-4c0a-a802-857dc37e2c9c", 00:16:27.191 "strip_size_kb": 0, 00:16:27.191 "state": "configuring", 00:16:27.191 "raid_level": "raid1", 
00:16:27.191 "superblock": true, 00:16:27.191 "num_base_bdevs": 2, 00:16:27.191 "num_base_bdevs_discovered": 1, 00:16:27.191 "num_base_bdevs_operational": 2, 00:16:27.191 "base_bdevs_list": [ 00:16:27.191 { 00:16:27.191 "name": "BaseBdev1", 00:16:27.191 "uuid": "dd65da00-4f1d-4e56-9b77-18b962514988", 00:16:27.191 "is_configured": true, 00:16:27.191 "data_offset": 256, 00:16:27.191 "data_size": 7936 00:16:27.191 }, 00:16:27.191 { 00:16:27.191 "name": "BaseBdev2", 00:16:27.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.191 "is_configured": false, 00:16:27.191 "data_offset": 0, 00:16:27.191 "data_size": 0 00:16:27.191 } 00:16:27.191 ] 00:16:27.191 }' 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.191 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 [2024-11-18 23:11:46.793818] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.451 [2024-11-18 23:11:46.793859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 [2024-11-18 23:11:46.805865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.451 [2024-11-18 23:11:46.807637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.451 [2024-11-18 23:11:46.807676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.451 
23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.451 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.452 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.711 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.711 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.711 "name": "Existed_Raid", 00:16:27.711 "uuid": "f6a58548-53aa-4437-ba07-73c084bb767b", 00:16:27.711 "strip_size_kb": 0, 00:16:27.711 "state": "configuring", 00:16:27.711 "raid_level": "raid1", 00:16:27.711 "superblock": true, 00:16:27.711 "num_base_bdevs": 2, 00:16:27.712 "num_base_bdevs_discovered": 1, 00:16:27.712 "num_base_bdevs_operational": 2, 00:16:27.712 "base_bdevs_list": [ 00:16:27.712 { 00:16:27.712 "name": "BaseBdev1", 00:16:27.712 "uuid": "dd65da00-4f1d-4e56-9b77-18b962514988", 00:16:27.712 "is_configured": true, 00:16:27.712 "data_offset": 256, 00:16:27.712 "data_size": 7936 00:16:27.712 }, 00:16:27.712 { 00:16:27.712 "name": "BaseBdev2", 00:16:27.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.712 "is_configured": false, 00:16:27.712 "data_offset": 0, 00:16:27.712 "data_size": 0 00:16:27.712 } 00:16:27.712 ] 00:16:27.712 }' 00:16:27.712 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:27.712 23:11:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.972 [2024-11-18 23:11:47.292318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.972 [2024-11-18 23:11:47.292856] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:27.972 [2024-11-18 23:11:47.292924] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:27.972 [2024-11-18 23:11:47.293270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:27.972 BaseBdev2 00:16:27.972 [2024-11-18 23:11:47.293537] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:27.972 [2024-11-18 23:11:47.293606] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:27.972 [2024-11-18 23:11:47.293779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.972 [ 00:16:27.972 { 00:16:27.972 "name": "BaseBdev2", 00:16:27.972 "aliases": [ 00:16:27.972 "942528d3-25d6-4283-9818-9c3834efa28c" 00:16:27.972 ], 00:16:27.972 "product_name": "Malloc disk", 00:16:27.972 "block_size": 4128, 00:16:27.972 "num_blocks": 8192, 00:16:27.972 "uuid": "942528d3-25d6-4283-9818-9c3834efa28c", 00:16:27.972 "md_size": 32, 00:16:27.972 "md_interleave": true, 00:16:27.972 "dif_type": 0, 00:16:27.972 "assigned_rate_limits": { 00:16:27.972 "rw_ios_per_sec": 0, 00:16:27.972 "rw_mbytes_per_sec": 0, 00:16:27.972 "r_mbytes_per_sec": 0, 00:16:27.972 "w_mbytes_per_sec": 0 00:16:27.972 }, 00:16:27.972 "claimed": true, 00:16:27.972 "claim_type": "exclusive_write", 
00:16:27.972 "zoned": false, 00:16:27.972 "supported_io_types": { 00:16:27.972 "read": true, 00:16:27.972 "write": true, 00:16:27.972 "unmap": true, 00:16:27.972 "flush": true, 00:16:27.972 "reset": true, 00:16:27.972 "nvme_admin": false, 00:16:27.972 "nvme_io": false, 00:16:27.972 "nvme_io_md": false, 00:16:27.972 "write_zeroes": true, 00:16:27.972 "zcopy": true, 00:16:27.972 "get_zone_info": false, 00:16:27.972 "zone_management": false, 00:16:27.972 "zone_append": false, 00:16:27.972 "compare": false, 00:16:27.972 "compare_and_write": false, 00:16:27.972 "abort": true, 00:16:27.972 "seek_hole": false, 00:16:27.972 "seek_data": false, 00:16:27.972 "copy": true, 00:16:27.972 "nvme_iov_md": false 00:16:27.972 }, 00:16:27.972 "memory_domains": [ 00:16:27.972 { 00:16:27.972 "dma_device_id": "system", 00:16:27.972 "dma_device_type": 1 00:16:27.972 }, 00:16:27.972 { 00:16:27.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.972 "dma_device_type": 2 00:16:27.972 } 00:16:27.972 ], 00:16:27.972 "driver_specific": {} 00:16:27.972 } 00:16:27.972 ] 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.972 
23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.972 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.973 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.973 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.233 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.233 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.233 "name": "Existed_Raid", 00:16:28.233 "uuid": "f6a58548-53aa-4437-ba07-73c084bb767b", 00:16:28.233 "strip_size_kb": 0, 00:16:28.233 "state": "online", 00:16:28.233 "raid_level": "raid1", 00:16:28.233 "superblock": true, 00:16:28.233 "num_base_bdevs": 2, 00:16:28.233 "num_base_bdevs_discovered": 2, 00:16:28.233 
"num_base_bdevs_operational": 2, 00:16:28.233 "base_bdevs_list": [ 00:16:28.233 { 00:16:28.233 "name": "BaseBdev1", 00:16:28.233 "uuid": "dd65da00-4f1d-4e56-9b77-18b962514988", 00:16:28.233 "is_configured": true, 00:16:28.233 "data_offset": 256, 00:16:28.233 "data_size": 7936 00:16:28.233 }, 00:16:28.233 { 00:16:28.233 "name": "BaseBdev2", 00:16:28.233 "uuid": "942528d3-25d6-4283-9818-9c3834efa28c", 00:16:28.233 "is_configured": true, 00:16:28.233 "data_offset": 256, 00:16:28.233 "data_size": 7936 00:16:28.233 } 00:16:28.233 ] 00:16:28.233 }' 00:16:28.233 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.233 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.491 23:11:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.491 [2024-11-18 23:11:47.791704] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.491 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:28.491 "name": "Existed_Raid", 00:16:28.491 "aliases": [ 00:16:28.491 "f6a58548-53aa-4437-ba07-73c084bb767b" 00:16:28.491 ], 00:16:28.491 "product_name": "Raid Volume", 00:16:28.491 "block_size": 4128, 00:16:28.491 "num_blocks": 7936, 00:16:28.491 "uuid": "f6a58548-53aa-4437-ba07-73c084bb767b", 00:16:28.491 "md_size": 32, 00:16:28.491 "md_interleave": true, 00:16:28.491 "dif_type": 0, 00:16:28.491 "assigned_rate_limits": { 00:16:28.491 "rw_ios_per_sec": 0, 00:16:28.491 "rw_mbytes_per_sec": 0, 00:16:28.491 "r_mbytes_per_sec": 0, 00:16:28.491 "w_mbytes_per_sec": 0 00:16:28.491 }, 00:16:28.491 "claimed": false, 00:16:28.491 "zoned": false, 00:16:28.491 "supported_io_types": { 00:16:28.491 "read": true, 00:16:28.491 "write": true, 00:16:28.491 "unmap": false, 00:16:28.491 "flush": false, 00:16:28.491 "reset": true, 00:16:28.491 "nvme_admin": false, 00:16:28.491 "nvme_io": false, 00:16:28.491 "nvme_io_md": false, 00:16:28.491 "write_zeroes": true, 00:16:28.491 "zcopy": false, 00:16:28.491 "get_zone_info": false, 00:16:28.491 "zone_management": false, 00:16:28.491 "zone_append": false, 00:16:28.491 "compare": false, 00:16:28.491 "compare_and_write": false, 00:16:28.491 "abort": false, 00:16:28.491 "seek_hole": false, 00:16:28.491 "seek_data": false, 00:16:28.491 "copy": false, 00:16:28.491 "nvme_iov_md": false 00:16:28.491 }, 00:16:28.491 "memory_domains": [ 00:16:28.491 { 00:16:28.491 "dma_device_id": "system", 00:16:28.491 "dma_device_type": 1 00:16:28.491 }, 00:16:28.491 { 00:16:28.491 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:28.491 "dma_device_type": 2 00:16:28.491 }, 00:16:28.491 { 00:16:28.491 "dma_device_id": "system", 00:16:28.491 "dma_device_type": 1 00:16:28.491 }, 00:16:28.491 { 00:16:28.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.491 "dma_device_type": 2 00:16:28.491 } 00:16:28.491 ], 00:16:28.491 "driver_specific": { 00:16:28.491 "raid": { 00:16:28.491 "uuid": "f6a58548-53aa-4437-ba07-73c084bb767b", 00:16:28.491 "strip_size_kb": 0, 00:16:28.491 "state": "online", 00:16:28.491 "raid_level": "raid1", 00:16:28.491 "superblock": true, 00:16:28.491 "num_base_bdevs": 2, 00:16:28.491 "num_base_bdevs_discovered": 2, 00:16:28.491 "num_base_bdevs_operational": 2, 00:16:28.491 "base_bdevs_list": [ 00:16:28.491 { 00:16:28.491 "name": "BaseBdev1", 00:16:28.491 "uuid": "dd65da00-4f1d-4e56-9b77-18b962514988", 00:16:28.491 "is_configured": true, 00:16:28.491 "data_offset": 256, 00:16:28.491 "data_size": 7936 00:16:28.491 }, 00:16:28.491 { 00:16:28.491 "name": "BaseBdev2", 00:16:28.491 "uuid": "942528d3-25d6-4283-9818-9c3834efa28c", 00:16:28.491 "is_configured": true, 00:16:28.491 "data_offset": 256, 00:16:28.492 "data_size": 7936 00:16:28.492 } 00:16:28.492 ] 00:16:28.492 } 00:16:28.492 } 00:16:28.492 }' 00:16:28.492 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:28.751 BaseBdev2' 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.751 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:28.752 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.752 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.752 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.752 23:11:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:28.752 
23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.752 [2024-11-18 23:11:48.031077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.752 23:11:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.752 "name": "Existed_Raid", 00:16:28.752 "uuid": "f6a58548-53aa-4437-ba07-73c084bb767b", 00:16:28.752 "strip_size_kb": 0, 00:16:28.752 "state": "online", 00:16:28.752 "raid_level": "raid1", 00:16:28.752 "superblock": true, 00:16:28.752 "num_base_bdevs": 2, 00:16:28.752 "num_base_bdevs_discovered": 1, 00:16:28.752 "num_base_bdevs_operational": 1, 00:16:28.752 "base_bdevs_list": [ 00:16:28.752 { 00:16:28.752 "name": null, 00:16:28.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.752 "is_configured": false, 00:16:28.752 "data_offset": 0, 00:16:28.752 "data_size": 7936 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "name": "BaseBdev2", 00:16:28.752 "uuid": "942528d3-25d6-4283-9818-9c3834efa28c", 00:16:28.752 "is_configured": true, 00:16:28.752 "data_offset": 256, 00:16:28.752 "data_size": 7936 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }' 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.752 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:29.321 23:11:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.321 [2024-11-18 23:11:48.525697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.321 [2024-11-18 23:11:48.525796] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.321 [2024-11-18 23:11:48.537557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.321 [2024-11-18 23:11:48.537625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.321 [2024-11-18 23:11:48.537638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98729 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98729 ']' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98729 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98729 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.321 killing process with pid 98729 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98729' 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98729 00:16:29.321 [2024-11-18 23:11:48.624811] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.321 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98729 00:16:29.321 [2024-11-18 23:11:48.625787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.581 
23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.581 00:16:29.581 real 0m3.987s 00:16:29.581 user 0m6.233s 00:16:29.581 sys 0m0.897s 00:16:29.581 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.581 23:11:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.581 ************************************ 00:16:29.581 END TEST raid_state_function_test_sb_md_interleaved 00:16:29.581 ************************************ 00:16:29.581 23:11:48 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:29.581 23:11:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:29.581 23:11:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.581 23:11:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.581 ************************************ 00:16:29.581 START TEST raid_superblock_test_md_interleaved 00:16:29.581 ************************************ 00:16:29.581 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:29.581 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:29.581 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:29.581 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:29.581 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98965 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:29.582 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98965 00:16:29.842 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98965 ']' 00:16:29.842 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.842 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.842 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.842 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.842 23:11:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.842 [2024-11-18 23:11:49.039036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:29.842 [2024-11-18 23:11:49.039155] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98965 ] 00:16:29.842 [2024-11-18 23:11:49.199386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.100 [2024-11-18 23:11:49.246717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.100 [2024-11-18 23:11:49.287983] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.100 [2024-11-18 23:11:49.288030] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.670 malloc1 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.670 [2024-11-18 23:11:49.869945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.670 [2024-11-18 23:11:49.870015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.670 [2024-11-18 23:11:49.870034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.670 [2024-11-18 23:11:49.870051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.670 
[2024-11-18 23:11:49.871890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.670 [2024-11-18 23:11:49.871929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:30.670 pt1 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.670 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.670 malloc2 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.671 [2024-11-18 23:11:49.910383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.671 [2024-11-18 23:11:49.910448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.671 [2024-11-18 23:11:49.910462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.671 [2024-11-18 23:11:49.910472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.671 [2024-11-18 23:11:49.912253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.671 [2024-11-18 23:11:49.912300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.671 pt2 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.671 [2024-11-18 23:11:49.922383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:30.671 [2024-11-18 23:11:49.924164] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.671 [2024-11-18 23:11:49.924317] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:30.671 [2024-11-18 23:11:49.924335] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:30.671 [2024-11-18 23:11:49.924411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:30.671 [2024-11-18 23:11:49.924487] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:30.671 [2024-11-18 23:11:49.924515] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:30.671 [2024-11-18 23:11:49.924600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.671 
23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.671 "name": "raid_bdev1", 00:16:30.671 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:30.671 "strip_size_kb": 0, 00:16:30.671 "state": "online", 00:16:30.671 "raid_level": "raid1", 00:16:30.671 "superblock": true, 00:16:30.671 "num_base_bdevs": 2, 00:16:30.671 "num_base_bdevs_discovered": 2, 00:16:30.671 "num_base_bdevs_operational": 2, 00:16:30.671 "base_bdevs_list": [ 00:16:30.671 { 00:16:30.671 "name": "pt1", 00:16:30.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.671 "is_configured": true, 00:16:30.671 "data_offset": 256, 00:16:30.671 "data_size": 7936 00:16:30.671 }, 00:16:30.671 { 00:16:30.671 "name": "pt2", 00:16:30.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.671 "is_configured": true, 00:16:30.671 "data_offset": 256, 00:16:30.671 "data_size": 7936 00:16:30.671 } 00:16:30.671 ] 00:16:30.671 }' 00:16:30.671 23:11:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.671 23:11:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.239 [2024-11-18 23:11:50.353879] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.239 "name": "raid_bdev1", 00:16:31.239 "aliases": [ 00:16:31.239 "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4" 00:16:31.239 ], 00:16:31.239 "product_name": "Raid Volume", 00:16:31.239 "block_size": 4128, 00:16:31.239 "num_blocks": 7936, 00:16:31.239 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:31.239 "md_size": 32, 
00:16:31.239 "md_interleave": true, 00:16:31.239 "dif_type": 0, 00:16:31.239 "assigned_rate_limits": { 00:16:31.239 "rw_ios_per_sec": 0, 00:16:31.239 "rw_mbytes_per_sec": 0, 00:16:31.239 "r_mbytes_per_sec": 0, 00:16:31.239 "w_mbytes_per_sec": 0 00:16:31.239 }, 00:16:31.239 "claimed": false, 00:16:31.239 "zoned": false, 00:16:31.239 "supported_io_types": { 00:16:31.239 "read": true, 00:16:31.239 "write": true, 00:16:31.239 "unmap": false, 00:16:31.239 "flush": false, 00:16:31.239 "reset": true, 00:16:31.239 "nvme_admin": false, 00:16:31.239 "nvme_io": false, 00:16:31.239 "nvme_io_md": false, 00:16:31.239 "write_zeroes": true, 00:16:31.239 "zcopy": false, 00:16:31.239 "get_zone_info": false, 00:16:31.239 "zone_management": false, 00:16:31.239 "zone_append": false, 00:16:31.239 "compare": false, 00:16:31.239 "compare_and_write": false, 00:16:31.239 "abort": false, 00:16:31.239 "seek_hole": false, 00:16:31.239 "seek_data": false, 00:16:31.239 "copy": false, 00:16:31.239 "nvme_iov_md": false 00:16:31.239 }, 00:16:31.239 "memory_domains": [ 00:16:31.239 { 00:16:31.239 "dma_device_id": "system", 00:16:31.239 "dma_device_type": 1 00:16:31.239 }, 00:16:31.239 { 00:16:31.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.239 "dma_device_type": 2 00:16:31.239 }, 00:16:31.239 { 00:16:31.239 "dma_device_id": "system", 00:16:31.239 "dma_device_type": 1 00:16:31.239 }, 00:16:31.239 { 00:16:31.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.239 "dma_device_type": 2 00:16:31.239 } 00:16:31.239 ], 00:16:31.239 "driver_specific": { 00:16:31.239 "raid": { 00:16:31.239 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:31.239 "strip_size_kb": 0, 00:16:31.239 "state": "online", 00:16:31.239 "raid_level": "raid1", 00:16:31.239 "superblock": true, 00:16:31.239 "num_base_bdevs": 2, 00:16:31.239 "num_base_bdevs_discovered": 2, 00:16:31.239 "num_base_bdevs_operational": 2, 00:16:31.239 "base_bdevs_list": [ 00:16:31.239 { 00:16:31.239 "name": "pt1", 00:16:31.239 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:31.239 "is_configured": true, 00:16:31.239 "data_offset": 256, 00:16:31.239 "data_size": 7936 00:16:31.239 }, 00:16:31.239 { 00:16:31.239 "name": "pt2", 00:16:31.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.239 "is_configured": true, 00:16:31.239 "data_offset": 256, 00:16:31.239 "data_size": 7936 00:16:31.239 } 00:16:31.239 ] 00:16:31.239 } 00:16:31.239 } 00:16:31.239 }' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.239 pt2' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:31.239 23:11:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.239 [2024-11-18 23:11:50.589396] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.239 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ad5fcbc9-144a-4a70-a60a-96f8f6098ab4 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z ad5fcbc9-144a-4a70-a60a-96f8f6098ab4 ']' 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.498 [2024-11-18 23:11:50.633097] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.498 [2024-11-18 23:11:50.633123] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.498 [2024-11-18 23:11:50.633181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.498 [2024-11-18 23:11:50.633241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.498 [2024-11-18 23:11:50.633251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.498 23:11:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.498 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 23:11:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 [2024-11-18 23:11:50.768884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.499 [2024-11-18 23:11:50.770612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.499 [2024-11-18 23:11:50.770671] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:31.499 [2024-11-18 23:11:50.770716] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:31.499 [2024-11-18 23:11:50.770731] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.499 [2024-11-18 23:11:50.770739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:31.499 request: 00:16:31.499 { 00:16:31.499 "name": "raid_bdev1", 00:16:31.499 "raid_level": "raid1", 00:16:31.499 "base_bdevs": [ 00:16:31.499 "malloc1", 00:16:31.499 "malloc2" 00:16:31.499 ], 00:16:31.499 "superblock": false, 00:16:31.499 "method": "bdev_raid_create", 00:16:31.499 "req_id": 1 00:16:31.499 } 00:16:31.499 Got JSON-RPC error response 00:16:31.499 response: 00:16:31.499 { 00:16:31.499 "code": -17, 00:16:31.499 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.499 } 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 23:11:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 [2024-11-18 23:11:50.824749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.499 [2024-11-18 23:11:50.824790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.499 [2024-11-18 23:11:50.824805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:31.499 [2024-11-18 23:11:50.824813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.499 [2024-11-18 23:11:50.826609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.499 [2024-11-18 23:11:50.826640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.499 [2024-11-18 23:11:50.826680] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:31.499 [2024-11-18 23:11:50.826719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.499 pt1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.499 23:11:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.759 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.759 
"name": "raid_bdev1", 00:16:31.759 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:31.759 "strip_size_kb": 0, 00:16:31.759 "state": "configuring", 00:16:31.759 "raid_level": "raid1", 00:16:31.759 "superblock": true, 00:16:31.759 "num_base_bdevs": 2, 00:16:31.759 "num_base_bdevs_discovered": 1, 00:16:31.759 "num_base_bdevs_operational": 2, 00:16:31.759 "base_bdevs_list": [ 00:16:31.759 { 00:16:31.759 "name": "pt1", 00:16:31.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.759 "is_configured": true, 00:16:31.759 "data_offset": 256, 00:16:31.759 "data_size": 7936 00:16:31.759 }, 00:16:31.759 { 00:16:31.759 "name": null, 00:16:31.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.759 "is_configured": false, 00:16:31.759 "data_offset": 256, 00:16:31.759 "data_size": 7936 00:16:31.759 } 00:16:31.759 ] 00:16:31.759 }' 00:16:31.759 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.759 23:11:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.019 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:32.019 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:32.019 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:32.019 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.019 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.019 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.019 [2024-11-18 23:11:51.256029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.019 [2024-11-18 23:11:51.256079] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.019 [2024-11-18 23:11:51.256096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:32.019 [2024-11-18 23:11:51.256104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.019 [2024-11-18 23:11:51.256204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.019 [2024-11-18 23:11:51.256215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.019 [2024-11-18 23:11:51.256251] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.019 [2024-11-18 23:11:51.256268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.019 [2024-11-18 23:11:51.256371] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:32.019 [2024-11-18 23:11:51.256384] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:32.019 [2024-11-18 23:11:51.256470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:32.019 [2024-11-18 23:11:51.256535] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:32.020 [2024-11-18 23:11:51.256552] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:32.020 [2024-11-18 23:11:51.256604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.020 pt2 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:32.020 23:11:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.020 "name": 
"raid_bdev1", 00:16:32.020 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:32.020 "strip_size_kb": 0, 00:16:32.020 "state": "online", 00:16:32.020 "raid_level": "raid1", 00:16:32.020 "superblock": true, 00:16:32.020 "num_base_bdevs": 2, 00:16:32.020 "num_base_bdevs_discovered": 2, 00:16:32.020 "num_base_bdevs_operational": 2, 00:16:32.020 "base_bdevs_list": [ 00:16:32.020 { 00:16:32.020 "name": "pt1", 00:16:32.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.020 "is_configured": true, 00:16:32.020 "data_offset": 256, 00:16:32.020 "data_size": 7936 00:16:32.020 }, 00:16:32.020 { 00:16:32.020 "name": "pt2", 00:16:32.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.020 "is_configured": true, 00:16:32.020 "data_offset": 256, 00:16:32.020 "data_size": 7936 00:16:32.020 } 00:16:32.020 ] 00:16:32.020 }' 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.020 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.280 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 [2024-11-18 23:11:51.659616] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.540 "name": "raid_bdev1", 00:16:32.540 "aliases": [ 00:16:32.540 "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4" 00:16:32.540 ], 00:16:32.540 "product_name": "Raid Volume", 00:16:32.540 "block_size": 4128, 00:16:32.540 "num_blocks": 7936, 00:16:32.540 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:32.540 "md_size": 32, 00:16:32.540 "md_interleave": true, 00:16:32.540 "dif_type": 0, 00:16:32.540 "assigned_rate_limits": { 00:16:32.540 "rw_ios_per_sec": 0, 00:16:32.540 "rw_mbytes_per_sec": 0, 00:16:32.540 "r_mbytes_per_sec": 0, 00:16:32.540 "w_mbytes_per_sec": 0 00:16:32.540 }, 00:16:32.540 "claimed": false, 00:16:32.540 "zoned": false, 00:16:32.540 "supported_io_types": { 00:16:32.540 "read": true, 00:16:32.540 "write": true, 00:16:32.540 "unmap": false, 00:16:32.540 "flush": false, 00:16:32.540 "reset": true, 00:16:32.540 "nvme_admin": false, 00:16:32.540 "nvme_io": false, 00:16:32.540 "nvme_io_md": false, 00:16:32.540 "write_zeroes": true, 00:16:32.540 "zcopy": false, 00:16:32.540 "get_zone_info": false, 00:16:32.540 "zone_management": false, 00:16:32.540 "zone_append": false, 00:16:32.540 "compare": false, 00:16:32.540 "compare_and_write": false, 00:16:32.540 "abort": false, 00:16:32.540 "seek_hole": false, 00:16:32.540 "seek_data": false, 00:16:32.540 "copy": false, 00:16:32.540 "nvme_iov_md": false 00:16:32.540 }, 
00:16:32.540 "memory_domains": [ 00:16:32.540 { 00:16:32.540 "dma_device_id": "system", 00:16:32.540 "dma_device_type": 1 00:16:32.540 }, 00:16:32.540 { 00:16:32.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.540 "dma_device_type": 2 00:16:32.540 }, 00:16:32.540 { 00:16:32.540 "dma_device_id": "system", 00:16:32.540 "dma_device_type": 1 00:16:32.540 }, 00:16:32.540 { 00:16:32.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.540 "dma_device_type": 2 00:16:32.540 } 00:16:32.540 ], 00:16:32.540 "driver_specific": { 00:16:32.540 "raid": { 00:16:32.540 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:32.540 "strip_size_kb": 0, 00:16:32.540 "state": "online", 00:16:32.540 "raid_level": "raid1", 00:16:32.540 "superblock": true, 00:16:32.540 "num_base_bdevs": 2, 00:16:32.540 "num_base_bdevs_discovered": 2, 00:16:32.540 "num_base_bdevs_operational": 2, 00:16:32.540 "base_bdevs_list": [ 00:16:32.540 { 00:16:32.540 "name": "pt1", 00:16:32.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.540 "is_configured": true, 00:16:32.540 "data_offset": 256, 00:16:32.540 "data_size": 7936 00:16:32.540 }, 00:16:32.540 { 00:16:32.540 "name": "pt2", 00:16:32.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.540 "is_configured": true, 00:16:32.540 "data_offset": 256, 00:16:32.540 "data_size": 7936 00:16:32.540 } 00:16:32.540 ] 00:16:32.540 } 00:16:32.540 } 00:16:32.540 }' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:32.540 pt2' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.540 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.541 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.541 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:32.541 [2024-11-18 23:11:51.891172] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.541 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' ad5fcbc9-144a-4a70-a60a-96f8f6098ab4 '!=' ad5fcbc9-144a-4a70-a60a-96f8f6098ab4 ']' 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 [2024-11-18 23:11:51.938888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:16:32.800 "name": "raid_bdev1", 00:16:32.800 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:32.800 "strip_size_kb": 0, 00:16:32.800 "state": "online", 00:16:32.800 "raid_level": "raid1", 00:16:32.800 "superblock": true, 00:16:32.800 "num_base_bdevs": 2, 00:16:32.800 "num_base_bdevs_discovered": 1, 00:16:32.800 "num_base_bdevs_operational": 1, 00:16:32.800 "base_bdevs_list": [ 00:16:32.800 { 00:16:32.800 "name": null, 00:16:32.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.800 "is_configured": false, 00:16:32.800 "data_offset": 0, 00:16:32.800 "data_size": 7936 00:16:32.800 }, 00:16:32.800 { 00:16:32.800 "name": "pt2", 00:16:32.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.800 "is_configured": true, 00:16:32.800 "data_offset": 256, 00:16:32.800 "data_size": 7936 00:16:32.800 } 00:16:32.800 ] 00:16:32.800 }' 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.800 23:11:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.060 [2024-11-18 23:11:52.402045] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.060 [2024-11-18 23:11:52.402071] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.060 [2024-11-18 23:11:52.402124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.060 [2024-11-18 23:11:52.402164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.060 [2024-11-18 
23:11:52.402173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:33.060 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.320 [2024-11-18 23:11:52.473931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.320 [2024-11-18 23:11:52.473974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.320 [2024-11-18 23:11:52.473988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:33.320 [2024-11-18 23:11:52.473996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.320 [2024-11-18 23:11:52.475800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.320 [2024-11-18 23:11:52.475840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.320 [2024-11-18 23:11:52.475883] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:33.320 [2024-11-18 23:11:52.475911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.320 [2024-11-18 23:11:52.475959] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:33.320 [2024-11-18 23:11:52.475967] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:16:33.320 [2024-11-18 23:11:52.476045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:33.320 [2024-11-18 23:11:52.476109] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:33.320 [2024-11-18 23:11:52.476131] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:33.320 [2024-11-18 23:11:52.476179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.320 pt2 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.320 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.321 "name": "raid_bdev1", 00:16:33.321 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:33.321 "strip_size_kb": 0, 00:16:33.321 "state": "online", 00:16:33.321 "raid_level": "raid1", 00:16:33.321 "superblock": true, 00:16:33.321 "num_base_bdevs": 2, 00:16:33.321 "num_base_bdevs_discovered": 1, 00:16:33.321 "num_base_bdevs_operational": 1, 00:16:33.321 "base_bdevs_list": [ 00:16:33.321 { 00:16:33.321 "name": null, 00:16:33.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.321 "is_configured": false, 00:16:33.321 "data_offset": 256, 00:16:33.321 "data_size": 7936 00:16:33.321 }, 00:16:33.321 { 00:16:33.321 "name": "pt2", 00:16:33.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.321 "is_configured": true, 00:16:33.321 "data_offset": 256, 00:16:33.321 "data_size": 7936 00:16:33.321 } 00:16:33.321 ] 00:16:33.321 }' 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.321 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 [2024-11-18 23:11:52.925253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.580 [2024-11-18 23:11:52.925288] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.580 [2024-11-18 23:11:52.925342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.580 [2024-11-18 23:11:52.925378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.580 [2024-11-18 23:11:52.925388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.840 [2024-11-18 23:11:52.985133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.840 [2024-11-18 23:11:52.985184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.840 [2024-11-18 23:11:52.985202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:33.840 [2024-11-18 23:11:52.985216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.840 [2024-11-18 23:11:52.987076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.840 [2024-11-18 23:11:52.987112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.840 [2024-11-18 23:11:52.987153] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:33.840 [2024-11-18 23:11:52.987187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.840 [2024-11-18 23:11:52.987275] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:33.840 [2024-11-18 23:11:52.987300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.840 [2024-11-18 23:11:52.987321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:33.840 [2024-11-18 23:11:52.987357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.840 [2024-11-18 23:11:52.987445] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:33.840 [2024-11-18 23:11:52.987460] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:33.840 [2024-11-18 23:11:52.987531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:33.840 [2024-11-18 23:11:52.987601] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:33.840 [2024-11-18 23:11:52.987616] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:33.840 [2024-11-18 23:11:52.987682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.840 pt1 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.840 23:11:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.840 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.840 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.840 "name": "raid_bdev1", 00:16:33.840 "uuid": "ad5fcbc9-144a-4a70-a60a-96f8f6098ab4", 00:16:33.840 "strip_size_kb": 0, 00:16:33.840 "state": "online", 00:16:33.840 "raid_level": "raid1", 00:16:33.840 "superblock": true, 00:16:33.840 "num_base_bdevs": 2, 00:16:33.840 "num_base_bdevs_discovered": 1, 00:16:33.840 "num_base_bdevs_operational": 1, 00:16:33.840 "base_bdevs_list": [ 00:16:33.840 { 00:16:33.840 "name": null, 00:16:33.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.840 "is_configured": false, 00:16:33.840 "data_offset": 256, 00:16:33.840 "data_size": 7936 00:16:33.840 }, 00:16:33.840 { 00:16:33.840 "name": "pt2", 00:16:33.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.840 "is_configured": true, 00:16:33.840 "data_offset": 256, 00:16:33.840 "data_size": 7936 00:16:33.840 } 00:16:33.840 ] 00:16:33.840 }' 00:16:33.840 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.840 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.100 23:11:53 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:34.100 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:34.100 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.100 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.100 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:34.360 [2024-11-18 23:11:53.504461] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' ad5fcbc9-144a-4a70-a60a-96f8f6098ab4 '!=' ad5fcbc9-144a-4a70-a60a-96f8f6098ab4 ']' 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98965 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98965 ']' 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98965 00:16:34.360 23:11:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98965 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.360 killing process with pid 98965 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98965' 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 98965 00:16:34.360 [2024-11-18 23:11:53.588082] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.360 [2024-11-18 23:11:53.588152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.360 [2024-11-18 23:11:53.588196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.360 [2024-11-18 23:11:53.588206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:34.360 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 98965 00:16:34.360 [2024-11-18 23:11:53.610873] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.620 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:34.620 00:16:34.620 real 0m4.905s 00:16:34.620 user 0m7.913s 00:16:34.620 sys 0m1.139s 00:16:34.620 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:16:34.620 23:11:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.620 ************************************ 00:16:34.620 END TEST raid_superblock_test_md_interleaved 00:16:34.620 ************************************ 00:16:34.620 23:11:53 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:34.620 23:11:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:34.620 23:11:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.620 23:11:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.620 ************************************ 00:16:34.620 START TEST raid_rebuild_test_sb_md_interleaved 00:16:34.620 ************************************ 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:34.620 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99282 00:16:34.620 23:11:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99282 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99282 ']' 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.621 23:11:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.881 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:34.881 Zero copy mechanism will not be used. 00:16:34.881 [2024-11-18 23:11:54.037874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:34.881 [2024-11-18 23:11:54.037997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99282 ] 00:16:34.881 [2024-11-18 23:11:54.196879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.881 [2024-11-18 23:11:54.243724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.140 [2024-11-18 23:11:54.286335] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.140 [2024-11-18 23:11:54.286373] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 BaseBdev1_malloc 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 [2024-11-18 23:11:54.872191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:35.709 [2024-11-18 23:11:54.872255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.709 [2024-11-18 23:11:54.872292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.709 [2024-11-18 23:11:54.872302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.709 [2024-11-18 23:11:54.874183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.709 [2024-11-18 23:11:54.874219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.709 BaseBdev1 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 BaseBdev2_malloc 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.709 [2024-11-18 23:11:54.909552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:35.709 [2024-11-18 23:11:54.909604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.709 [2024-11-18 23:11:54.909622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.709 [2024-11-18 23:11:54.909630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.709 [2024-11-18 23:11:54.911485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.709 [2024-11-18 23:11:54.911520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.709 BaseBdev2 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 spare_malloc 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 spare_delay 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 [2024-11-18 23:11:54.942164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.709 [2024-11-18 23:11:54.942217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.709 [2024-11-18 23:11:54.942237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.709 [2024-11-18 23:11:54.942245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.709 [2024-11-18 23:11:54.944061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.709 [2024-11-18 23:11:54.944095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.709 spare 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 [2024-11-18 23:11:54.950174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.709 [2024-11-18 23:11:54.951943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.709 [2024-11-18 
23:11:54.952099] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:35.709 [2024-11-18 23:11:54.952113] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:35.709 [2024-11-18 23:11:54.952196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:35.709 [2024-11-18 23:11:54.952261] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:35.709 [2024-11-18 23:11:54.952298] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:35.709 [2024-11-18 23:11:54.952376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.709 23:11:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.709 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.709 "name": "raid_bdev1", 00:16:35.709 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:35.709 "strip_size_kb": 0, 00:16:35.709 "state": "online", 00:16:35.709 "raid_level": "raid1", 00:16:35.710 "superblock": true, 00:16:35.710 "num_base_bdevs": 2, 00:16:35.710 "num_base_bdevs_discovered": 2, 00:16:35.710 "num_base_bdevs_operational": 2, 00:16:35.710 "base_bdevs_list": [ 00:16:35.710 { 00:16:35.710 "name": "BaseBdev1", 00:16:35.710 "uuid": "469f4194-939d-502a-b2dd-6d4b65d902d3", 00:16:35.710 "is_configured": true, 00:16:35.710 "data_offset": 256, 00:16:35.710 "data_size": 7936 00:16:35.710 }, 00:16:35.710 { 00:16:35.710 "name": "BaseBdev2", 00:16:35.710 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:35.710 "is_configured": true, 00:16:35.710 "data_offset": 256, 00:16:35.710 "data_size": 7936 00:16:35.710 } 00:16:35.710 ] 00:16:35.710 }' 00:16:35.710 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.710 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.280 23:11:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.280 [2024-11-18 23:11:55.413616] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:36.280 23:11:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.280 [2024-11-18 23:11:55.489192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.280 23:11:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.280 "name": "raid_bdev1", 00:16:36.280 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:36.280 "strip_size_kb": 0, 00:16:36.280 "state": "online", 00:16:36.280 "raid_level": "raid1", 00:16:36.280 "superblock": true, 00:16:36.280 "num_base_bdevs": 2, 00:16:36.280 "num_base_bdevs_discovered": 1, 00:16:36.280 "num_base_bdevs_operational": 1, 00:16:36.280 "base_bdevs_list": [ 00:16:36.280 { 00:16:36.280 "name": null, 00:16:36.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.280 "is_configured": false, 00:16:36.280 "data_offset": 0, 00:16:36.280 "data_size": 7936 00:16:36.280 }, 00:16:36.280 { 00:16:36.280 "name": "BaseBdev2", 00:16:36.280 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:36.280 "is_configured": true, 00:16:36.280 "data_offset": 256, 00:16:36.280 "data_size": 7936 00:16:36.280 } 00:16:36.280 ] 00:16:36.280 }' 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.280 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.540 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.540 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.540 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.540 [2024-11-18 23:11:55.904493] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.540 [2024-11-18 23:11:55.907359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:36.540 [2024-11-18 23:11:55.909151] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.540 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.540 23:11:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.920 "name": "raid_bdev1", 00:16:37.920 
"uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:37.920 "strip_size_kb": 0, 00:16:37.920 "state": "online", 00:16:37.920 "raid_level": "raid1", 00:16:37.920 "superblock": true, 00:16:37.920 "num_base_bdevs": 2, 00:16:37.920 "num_base_bdevs_discovered": 2, 00:16:37.920 "num_base_bdevs_operational": 2, 00:16:37.920 "process": { 00:16:37.920 "type": "rebuild", 00:16:37.920 "target": "spare", 00:16:37.920 "progress": { 00:16:37.920 "blocks": 2560, 00:16:37.920 "percent": 32 00:16:37.920 } 00:16:37.920 }, 00:16:37.920 "base_bdevs_list": [ 00:16:37.920 { 00:16:37.920 "name": "spare", 00:16:37.920 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 256, 00:16:37.920 "data_size": 7936 00:16:37.920 }, 00:16:37.920 { 00:16:37.920 "name": "BaseBdev2", 00:16:37.920 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 256, 00:16:37.920 "data_size": 7936 00:16:37.920 } 00:16:37.920 ] 00:16:37.920 }' 00:16:37.920 23:11:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.920 [2024-11-18 23:11:57.063838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:37.920 [2024-11-18 23:11:57.113771] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.920 [2024-11-18 23:11:57.113823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.920 [2024-11-18 23:11:57.113839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.920 [2024-11-18 23:11:57.113846] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.920 "name": "raid_bdev1", 00:16:37.920 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:37.920 "strip_size_kb": 0, 00:16:37.920 "state": "online", 00:16:37.920 "raid_level": "raid1", 00:16:37.920 "superblock": true, 00:16:37.920 "num_base_bdevs": 2, 00:16:37.920 "num_base_bdevs_discovered": 1, 00:16:37.920 "num_base_bdevs_operational": 1, 00:16:37.920 "base_bdevs_list": [ 00:16:37.920 { 00:16:37.920 "name": null, 00:16:37.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.920 "is_configured": false, 00:16:37.920 "data_offset": 0, 00:16:37.920 "data_size": 7936 00:16:37.920 }, 00:16:37.920 { 00:16:37.920 "name": "BaseBdev2", 00:16:37.920 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 256, 00:16:37.920 "data_size": 7936 00:16:37.920 } 00:16:37.920 ] 00:16:37.920 }' 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.920 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.489 "name": "raid_bdev1", 00:16:38.489 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:38.489 "strip_size_kb": 0, 00:16:38.489 "state": "online", 00:16:38.489 "raid_level": "raid1", 00:16:38.489 "superblock": true, 00:16:38.489 "num_base_bdevs": 2, 00:16:38.489 "num_base_bdevs_discovered": 1, 00:16:38.489 "num_base_bdevs_operational": 1, 00:16:38.489 "base_bdevs_list": [ 00:16:38.489 { 00:16:38.489 "name": null, 00:16:38.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.489 "is_configured": false, 00:16:38.489 "data_offset": 0, 00:16:38.489 "data_size": 7936 00:16:38.489 }, 00:16:38.489 { 00:16:38.489 "name": "BaseBdev2", 00:16:38.489 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:38.489 "is_configured": true, 00:16:38.489 "data_offset": 256, 00:16:38.489 "data_size": 7936 00:16:38.489 } 00:16:38.489 ] 00:16:38.489 }' 
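An editorial aside (not part of the captured trace): the `verify_raid_bdev_process` checks in this log filter the `bdev_raid_get_bdevs` JSON with jq's alternative operator `//`, which substitutes a default when a path is missing or null — that is why the harness can always compare against a fixed string such as `"none"`. A minimal sketch of the pattern, using a trimmed stand-in for the RPC output (not the full structure shown above):

```shell
# Trimmed stand-in for one entry of the bdev_raid_get_bdevs output.
json='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'

# `// "none"` is jq's alternative operator: when `.process.type` is null
# or absent, the filter yields "none" instead, so the comparison in the
# test script never sees an empty string.
echo "$json" | jq -r '.process.type // "none"'              # prints: rebuild
echo '{"name":"raid_bdev1"}' | jq -r '.process.type // "none"'   # prints: none
```

This is why the trace alternates between `[[ rebuild == \r\e\b\u\i\l\d ]]` while a rebuild is in flight and `[[ none == \n\o\n\e ]]` once the process object disappears from the RPC output.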
00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.489 [2024-11-18 23:11:57.720519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.489 [2024-11-18 23:11:57.723232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:38.489 [2024-11-18 23:11:57.725084] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.489 23:11:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.427 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.427 "name": "raid_bdev1", 00:16:39.427 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:39.427 "strip_size_kb": 0, 00:16:39.427 "state": "online", 00:16:39.427 "raid_level": "raid1", 00:16:39.427 "superblock": true, 00:16:39.427 "num_base_bdevs": 2, 00:16:39.427 "num_base_bdevs_discovered": 2, 00:16:39.427 "num_base_bdevs_operational": 2, 00:16:39.427 "process": { 00:16:39.427 "type": "rebuild", 00:16:39.427 "target": "spare", 00:16:39.427 "progress": { 00:16:39.427 "blocks": 2560, 00:16:39.427 "percent": 32 00:16:39.427 } 00:16:39.427 }, 00:16:39.427 "base_bdevs_list": [ 00:16:39.427 { 00:16:39.427 "name": "spare", 00:16:39.427 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:39.427 "is_configured": true, 00:16:39.427 "data_offset": 256, 00:16:39.427 "data_size": 7936 00:16:39.427 }, 00:16:39.427 { 00:16:39.428 "name": "BaseBdev2", 00:16:39.428 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:39.428 "is_configured": true, 00:16:39.428 "data_offset": 256, 00:16:39.428 "data_size": 7936 00:16:39.428 } 00:16:39.428 ] 00:16:39.428 }' 00:16:39.428 23:11:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.687 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.687 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.687 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.687 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:39.687 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:39.687 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=614 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.688 23:11:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.688 "name": "raid_bdev1", 00:16:39.688 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:39.688 "strip_size_kb": 0, 00:16:39.688 "state": "online", 00:16:39.688 "raid_level": "raid1", 00:16:39.688 "superblock": true, 00:16:39.688 "num_base_bdevs": 2, 00:16:39.688 "num_base_bdevs_discovered": 2, 00:16:39.688 "num_base_bdevs_operational": 2, 00:16:39.688 "process": { 00:16:39.688 "type": "rebuild", 00:16:39.688 "target": "spare", 00:16:39.688 "progress": { 00:16:39.688 "blocks": 2816, 00:16:39.688 "percent": 35 00:16:39.688 } 00:16:39.688 }, 00:16:39.688 "base_bdevs_list": [ 00:16:39.688 { 00:16:39.688 "name": "spare", 00:16:39.688 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:39.688 "is_configured": true, 00:16:39.688 "data_offset": 256, 00:16:39.688 "data_size": 7936 00:16:39.688 }, 00:16:39.688 { 00:16:39.688 "name": "BaseBdev2", 00:16:39.688 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:39.688 "is_configured": true, 00:16:39.688 "data_offset": 256, 00:16:39.688 "data_size": 7936 00:16:39.688 } 00:16:39.688 ] 00:16:39.688 }' 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.688 23:11:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.688 23:11:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.688 23:11:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.080 23:12:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.080 "name": "raid_bdev1", 00:16:41.080 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:41.080 "strip_size_kb": 0, 00:16:41.080 "state": "online", 00:16:41.080 "raid_level": "raid1", 00:16:41.080 "superblock": true, 00:16:41.080 "num_base_bdevs": 2, 00:16:41.080 "num_base_bdevs_discovered": 2, 00:16:41.080 "num_base_bdevs_operational": 2, 00:16:41.080 "process": { 00:16:41.080 "type": "rebuild", 00:16:41.080 "target": "spare", 00:16:41.080 "progress": { 00:16:41.080 "blocks": 5632, 00:16:41.080 "percent": 70 00:16:41.080 } 00:16:41.080 }, 00:16:41.080 "base_bdevs_list": [ 00:16:41.080 { 00:16:41.080 "name": "spare", 00:16:41.080 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:41.080 "is_configured": true, 00:16:41.080 "data_offset": 256, 00:16:41.080 "data_size": 7936 00:16:41.080 }, 00:16:41.080 { 00:16:41.080 "name": "BaseBdev2", 00:16:41.080 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:41.080 "is_configured": true, 00:16:41.080 "data_offset": 256, 00:16:41.080 "data_size": 7936 00:16:41.080 } 00:16:41.080 ] 00:16:41.080 }' 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.080 23:12:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.670 [2024-11-18 23:12:00.835567] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:41.670 [2024-11-18 23:12:00.835645] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:41.670 [2024-11-18 23:12:00.835730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.929 "name": "raid_bdev1", 00:16:41.929 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:41.929 "strip_size_kb": 0, 00:16:41.929 "state": "online", 00:16:41.929 "raid_level": "raid1", 00:16:41.929 "superblock": true, 00:16:41.929 "num_base_bdevs": 2, 00:16:41.929 
"num_base_bdevs_discovered": 2, 00:16:41.929 "num_base_bdevs_operational": 2, 00:16:41.929 "base_bdevs_list": [ 00:16:41.929 { 00:16:41.929 "name": "spare", 00:16:41.929 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:41.929 "is_configured": true, 00:16:41.929 "data_offset": 256, 00:16:41.929 "data_size": 7936 00:16:41.929 }, 00:16:41.929 { 00:16:41.929 "name": "BaseBdev2", 00:16:41.929 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:41.929 "is_configured": true, 00:16:41.929 "data_offset": 256, 00:16:41.929 "data_size": 7936 00:16:41.929 } 00:16:41.929 ] 00:16:41.929 }' 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:41.929 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.188 23:12:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.188 "name": "raid_bdev1", 00:16:42.188 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:42.188 "strip_size_kb": 0, 00:16:42.188 "state": "online", 00:16:42.188 "raid_level": "raid1", 00:16:42.188 "superblock": true, 00:16:42.188 "num_base_bdevs": 2, 00:16:42.188 "num_base_bdevs_discovered": 2, 00:16:42.188 "num_base_bdevs_operational": 2, 00:16:42.188 "base_bdevs_list": [ 00:16:42.188 { 00:16:42.188 "name": "spare", 00:16:42.188 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:42.188 "is_configured": true, 00:16:42.188 "data_offset": 256, 00:16:42.188 "data_size": 7936 00:16:42.188 }, 00:16:42.188 { 00:16:42.188 "name": "BaseBdev2", 00:16:42.188 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:42.188 "is_configured": true, 00:16:42.188 "data_offset": 256, 00:16:42.188 "data_size": 7936 00:16:42.188 } 00:16:42.188 ] 00:16:42.188 }' 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.188 23:12:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.188 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.188 "name": 
"raid_bdev1", 00:16:42.188 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:42.188 "strip_size_kb": 0, 00:16:42.188 "state": "online", 00:16:42.188 "raid_level": "raid1", 00:16:42.188 "superblock": true, 00:16:42.188 "num_base_bdevs": 2, 00:16:42.188 "num_base_bdevs_discovered": 2, 00:16:42.189 "num_base_bdevs_operational": 2, 00:16:42.189 "base_bdevs_list": [ 00:16:42.189 { 00:16:42.189 "name": "spare", 00:16:42.189 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:42.189 "is_configured": true, 00:16:42.189 "data_offset": 256, 00:16:42.189 "data_size": 7936 00:16:42.189 }, 00:16:42.189 { 00:16:42.189 "name": "BaseBdev2", 00:16:42.189 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:42.189 "is_configured": true, 00:16:42.189 "data_offset": 256, 00:16:42.189 "data_size": 7936 00:16:42.189 } 00:16:42.189 ] 00:16:42.189 }' 00:16:42.189 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.189 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.756 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 [2024-11-18 23:12:01.909168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.757 [2024-11-18 23:12:01.909200] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.757 [2024-11-18 23:12:01.909271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.757 [2024-11-18 23:12:01.909368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.757 [2024-11-18 
23:12:01.909391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.757 23:12:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 [2024-11-18 23:12:01.969054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.757 [2024-11-18 23:12:01.969109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.757 [2024-11-18 23:12:01.969126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:42.757 [2024-11-18 23:12:01.969136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.757 [2024-11-18 23:12:01.971158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.757 [2024-11-18 23:12:01.971193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.757 [2024-11-18 23:12:01.971244] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:42.757 [2024-11-18 23:12:01.971299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.757 [2024-11-18 23:12:01.971410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.757 spare 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.757 23:12:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 [2024-11-18 23:12:02.071326] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:42.757 [2024-11-18 23:12:02.071351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:42.757 [2024-11-18 23:12:02.071435] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:42.757 [2024-11-18 23:12:02.071530] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:42.757 [2024-11-18 23:12:02.071542] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:42.757 [2024-11-18 23:12:02.071610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.757 23:12:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.757 "name": "raid_bdev1", 00:16:42.757 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:42.757 "strip_size_kb": 0, 00:16:42.757 "state": "online", 00:16:42.757 "raid_level": "raid1", 00:16:42.757 "superblock": true, 00:16:42.757 "num_base_bdevs": 2, 00:16:42.757 "num_base_bdevs_discovered": 2, 00:16:42.757 "num_base_bdevs_operational": 2, 00:16:42.757 "base_bdevs_list": [ 00:16:42.757 { 00:16:42.757 "name": "spare", 00:16:42.757 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:42.757 "is_configured": true, 00:16:42.757 "data_offset": 256, 00:16:42.757 "data_size": 7936 00:16:42.757 }, 00:16:42.757 { 00:16:42.757 "name": "BaseBdev2", 00:16:42.757 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:42.757 "is_configured": true, 00:16:42.757 "data_offset": 256, 00:16:42.757 "data_size": 7936 00:16:42.757 } 00:16:42.757 ] 00:16:42.757 }' 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.757 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.327 23:12:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.327 "name": "raid_bdev1", 00:16:43.327 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:43.327 "strip_size_kb": 0, 00:16:43.327 "state": "online", 00:16:43.327 "raid_level": "raid1", 00:16:43.327 "superblock": true, 00:16:43.327 "num_base_bdevs": 2, 00:16:43.327 "num_base_bdevs_discovered": 2, 00:16:43.327 "num_base_bdevs_operational": 2, 00:16:43.327 "base_bdevs_list": [ 00:16:43.327 { 00:16:43.327 "name": "spare", 00:16:43.327 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:43.327 "is_configured": true, 00:16:43.327 "data_offset": 256, 00:16:43.327 "data_size": 7936 00:16:43.327 }, 00:16:43.327 { 00:16:43.327 "name": "BaseBdev2", 00:16:43.327 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:43.327 "is_configured": true, 00:16:43.327 "data_offset": 256, 00:16:43.327 "data_size": 7936 00:16:43.327 } 00:16:43.327 ] 00:16:43.327 }' 00:16:43.327 23:12:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.327 [2024-11-18 23:12:02.687874] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:43.327 23:12:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.327 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.587 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.587 "name": "raid_bdev1", 00:16:43.587 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:43.587 "strip_size_kb": 0, 00:16:43.587 "state": "online", 00:16:43.587 
"raid_level": "raid1", 00:16:43.587 "superblock": true, 00:16:43.587 "num_base_bdevs": 2, 00:16:43.587 "num_base_bdevs_discovered": 1, 00:16:43.587 "num_base_bdevs_operational": 1, 00:16:43.587 "base_bdevs_list": [ 00:16:43.587 { 00:16:43.587 "name": null, 00:16:43.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.587 "is_configured": false, 00:16:43.587 "data_offset": 0, 00:16:43.587 "data_size": 7936 00:16:43.587 }, 00:16:43.587 { 00:16:43.587 "name": "BaseBdev2", 00:16:43.587 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:43.587 "is_configured": true, 00:16:43.587 "data_offset": 256, 00:16:43.587 "data_size": 7936 00:16:43.587 } 00:16:43.587 ] 00:16:43.587 }' 00:16:43.587 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.587 23:12:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.847 23:12:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.847 23:12:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.847 23:12:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.847 [2024-11-18 23:12:03.111336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.847 [2024-11-18 23:12:03.111584] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.847 [2024-11-18 23:12:03.111655] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:43.847 [2024-11-18 23:12:03.111749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.847 [2024-11-18 23:12:03.114425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:43.847 [2024-11-18 23:12:03.116255] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.847 23:12:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.847 23:12:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.785 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:45.045 "name": "raid_bdev1", 00:16:45.045 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:45.045 "strip_size_kb": 0, 00:16:45.045 "state": "online", 00:16:45.045 "raid_level": "raid1", 00:16:45.045 "superblock": true, 00:16:45.045 "num_base_bdevs": 2, 00:16:45.045 "num_base_bdevs_discovered": 2, 00:16:45.045 "num_base_bdevs_operational": 2, 00:16:45.045 "process": { 00:16:45.045 "type": "rebuild", 00:16:45.045 "target": "spare", 00:16:45.045 "progress": { 00:16:45.045 "blocks": 2560, 00:16:45.045 "percent": 32 00:16:45.045 } 00:16:45.045 }, 00:16:45.045 "base_bdevs_list": [ 00:16:45.045 { 00:16:45.045 "name": "spare", 00:16:45.045 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:45.045 "is_configured": true, 00:16:45.045 "data_offset": 256, 00:16:45.045 "data_size": 7936 00:16:45.045 }, 00:16:45.045 { 00:16:45.045 "name": "BaseBdev2", 00:16:45.045 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:45.045 "is_configured": true, 00:16:45.045 "data_offset": 256, 00:16:45.045 "data_size": 7936 00:16:45.045 } 00:16:45.045 ] 00:16:45.045 }' 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.045 [2024-11-18 23:12:04.275484] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.045 [2024-11-18 23:12:04.320198] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.045 [2024-11-18 23:12:04.320248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.045 [2024-11-18 23:12:04.320264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.045 [2024-11-18 23:12:04.320270] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.045 23:12:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.045 "name": "raid_bdev1", 00:16:45.045 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:45.045 "strip_size_kb": 0, 00:16:45.045 "state": "online", 00:16:45.045 "raid_level": "raid1", 00:16:45.045 "superblock": true, 00:16:45.045 "num_base_bdevs": 2, 00:16:45.045 "num_base_bdevs_discovered": 1, 00:16:45.045 "num_base_bdevs_operational": 1, 00:16:45.045 "base_bdevs_list": [ 00:16:45.045 { 00:16:45.045 "name": null, 00:16:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.045 "is_configured": false, 00:16:45.045 "data_offset": 0, 00:16:45.045 "data_size": 7936 00:16:45.045 }, 00:16:45.045 { 00:16:45.045 "name": "BaseBdev2", 00:16:45.045 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:45.045 "is_configured": true, 00:16:45.045 "data_offset": 256, 00:16:45.045 "data_size": 7936 00:16:45.045 } 00:16:45.045 ] 00:16:45.045 }' 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.045 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.614 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.614 23:12:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.614 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.614 [2024-11-18 23:12:04.750938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.614 [2024-11-18 23:12:04.751065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.614 [2024-11-18 23:12:04.751123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:45.614 [2024-11-18 23:12:04.751162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.614 [2024-11-18 23:12:04.751399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.614 [2024-11-18 23:12:04.751480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.614 [2024-11-18 23:12:04.751574] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:45.614 [2024-11-18 23:12:04.751618] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.614 [2024-11-18 23:12:04.751674] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:45.614 [2024-11-18 23:12:04.751735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.614 [2024-11-18 23:12:04.754083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:45.614 [2024-11-18 23:12:04.755941] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.614 spare 00:16:45.614 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.614 23:12:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:46.563 "name": "raid_bdev1", 00:16:46.563 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:46.563 "strip_size_kb": 0, 00:16:46.563 "state": "online", 00:16:46.563 "raid_level": "raid1", 00:16:46.563 "superblock": true, 00:16:46.563 "num_base_bdevs": 2, 00:16:46.563 "num_base_bdevs_discovered": 2, 00:16:46.563 "num_base_bdevs_operational": 2, 00:16:46.563 "process": { 00:16:46.563 "type": "rebuild", 00:16:46.563 "target": "spare", 00:16:46.563 "progress": { 00:16:46.563 "blocks": 2560, 00:16:46.563 "percent": 32 00:16:46.563 } 00:16:46.563 }, 00:16:46.563 "base_bdevs_list": [ 00:16:46.563 { 00:16:46.563 "name": "spare", 00:16:46.563 "uuid": "6e6cecc3-6d97-5cc9-94b7-5aa63ce9204a", 00:16:46.563 "is_configured": true, 00:16:46.563 "data_offset": 256, 00:16:46.563 "data_size": 7936 00:16:46.563 }, 00:16:46.563 { 00:16:46.563 "name": "BaseBdev2", 00:16:46.563 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:46.563 "is_configured": true, 00:16:46.563 "data_offset": 256, 00:16:46.563 "data_size": 7936 00:16:46.563 } 00:16:46.563 ] 00:16:46.563 }' 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.563 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.563 [2024-11-18 
23:12:05.899062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.823 [2024-11-18 23:12:05.959830] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.823 [2024-11-18 23:12:05.959890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.823 [2024-11-18 23:12:05.959904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.823 [2024-11-18 23:12:05.959913] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.823 23:12:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.823 23:12:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.823 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.823 "name": "raid_bdev1", 00:16:46.823 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:46.823 "strip_size_kb": 0, 00:16:46.823 "state": "online", 00:16:46.823 "raid_level": "raid1", 00:16:46.823 "superblock": true, 00:16:46.823 "num_base_bdevs": 2, 00:16:46.823 "num_base_bdevs_discovered": 1, 00:16:46.823 "num_base_bdevs_operational": 1, 00:16:46.823 "base_bdevs_list": [ 00:16:46.823 { 00:16:46.823 "name": null, 00:16:46.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.823 "is_configured": false, 00:16:46.823 "data_offset": 0, 00:16:46.823 "data_size": 7936 00:16:46.823 }, 00:16:46.823 { 00:16:46.823 "name": "BaseBdev2", 00:16:46.823 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:46.823 "is_configured": true, 00:16:46.823 "data_offset": 256, 00:16:46.823 "data_size": 7936 00:16:46.823 } 00:16:46.823 ] 00:16:46.823 }' 00:16:46.823 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.823 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.083 23:12:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.083 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.343 "name": "raid_bdev1", 00:16:47.343 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:47.343 "strip_size_kb": 0, 00:16:47.343 "state": "online", 00:16:47.343 "raid_level": "raid1", 00:16:47.343 "superblock": true, 00:16:47.343 "num_base_bdevs": 2, 00:16:47.343 "num_base_bdevs_discovered": 1, 00:16:47.343 "num_base_bdevs_operational": 1, 00:16:47.343 "base_bdevs_list": [ 00:16:47.343 { 00:16:47.343 "name": null, 00:16:47.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.343 "is_configured": false, 00:16:47.343 "data_offset": 0, 00:16:47.343 "data_size": 7936 00:16:47.343 }, 00:16:47.343 { 00:16:47.343 "name": "BaseBdev2", 00:16:47.343 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:47.343 "is_configured": true, 00:16:47.343 "data_offset": 256, 
00:16:47.343 "data_size": 7936 00:16:47.343 } 00:16:47.343 ] 00:16:47.343 }' 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.343 [2024-11-18 23:12:06.578189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.343 [2024-11-18 23:12:06.578248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.343 [2024-11-18 23:12:06.578265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:47.343 [2024-11-18 23:12:06.578275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.343 [2024-11-18 23:12:06.578430] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.343 [2024-11-18 23:12:06.578445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.343 [2024-11-18 23:12:06.578504] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:47.343 [2024-11-18 23:12:06.578528] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.343 [2024-11-18 23:12:06.578535] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:47.343 [2024-11-18 23:12:06.578549] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:47.343 BaseBdev1 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.343 23:12:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.289 23:12:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.289 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.289 "name": "raid_bdev1", 00:16:48.289 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:48.289 "strip_size_kb": 0, 00:16:48.289 "state": "online", 00:16:48.289 "raid_level": "raid1", 00:16:48.289 "superblock": true, 00:16:48.289 "num_base_bdevs": 2, 00:16:48.289 "num_base_bdevs_discovered": 1, 00:16:48.289 "num_base_bdevs_operational": 1, 00:16:48.289 "base_bdevs_list": [ 00:16:48.289 { 00:16:48.289 "name": null, 00:16:48.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.289 "is_configured": false, 00:16:48.289 "data_offset": 0, 00:16:48.289 "data_size": 7936 00:16:48.290 }, 00:16:48.290 { 00:16:48.290 "name": "BaseBdev2", 00:16:48.290 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:48.290 "is_configured": true, 00:16:48.290 "data_offset": 256, 00:16:48.290 "data_size": 7936 00:16:48.290 } 00:16:48.290 ] 00:16:48.290 }' 00:16:48.290 23:12:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.290 23:12:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.857 "name": "raid_bdev1", 00:16:48.857 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:48.857 "strip_size_kb": 0, 00:16:48.857 "state": "online", 00:16:48.857 "raid_level": "raid1", 00:16:48.857 "superblock": true, 00:16:48.857 "num_base_bdevs": 2, 00:16:48.857 "num_base_bdevs_discovered": 1, 00:16:48.857 "num_base_bdevs_operational": 1, 00:16:48.857 "base_bdevs_list": [ 00:16:48.857 { 00:16:48.857 "name": 
null, 00:16:48.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.857 "is_configured": false, 00:16:48.857 "data_offset": 0, 00:16:48.857 "data_size": 7936 00:16:48.857 }, 00:16:48.857 { 00:16:48.857 "name": "BaseBdev2", 00:16:48.857 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:48.857 "is_configured": true, 00:16:48.857 "data_offset": 256, 00:16:48.857 "data_size": 7936 00:16:48.857 } 00:16:48.857 ] 00:16:48.857 }' 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.857 [2024-11-18 23:12:08.183519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.857 [2024-11-18 23:12:08.183730] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.857 [2024-11-18 23:12:08.183756] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:48.857 request: 00:16:48.857 { 00:16:48.857 "base_bdev": "BaseBdev1", 00:16:48.857 "raid_bdev": "raid_bdev1", 00:16:48.857 "method": "bdev_raid_add_base_bdev", 00:16:48.857 "req_id": 1 00:16:48.857 } 00:16:48.857 Got JSON-RPC error response 00:16:48.857 response: 00:16:48.857 { 00:16:48.857 "code": -22, 00:16:48.857 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:48.857 } 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:48.857 23:12:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:50.249 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:50.249 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.249 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.249 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.249 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.250 "name": "raid_bdev1", 00:16:50.250 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:50.250 "strip_size_kb": 0, 
00:16:50.250 "state": "online", 00:16:50.250 "raid_level": "raid1", 00:16:50.250 "superblock": true, 00:16:50.250 "num_base_bdevs": 2, 00:16:50.250 "num_base_bdevs_discovered": 1, 00:16:50.250 "num_base_bdevs_operational": 1, 00:16:50.250 "base_bdevs_list": [ 00:16:50.250 { 00:16:50.250 "name": null, 00:16:50.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.250 "is_configured": false, 00:16:50.250 "data_offset": 0, 00:16:50.250 "data_size": 7936 00:16:50.250 }, 00:16:50.250 { 00:16:50.250 "name": "BaseBdev2", 00:16:50.250 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:50.250 "is_configured": true, 00:16:50.250 "data_offset": 256, 00:16:50.250 "data_size": 7936 00:16:50.250 } 00:16:50.250 ] 00:16:50.250 }' 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.250 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.513 23:12:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.513 "name": "raid_bdev1", 00:16:50.513 "uuid": "c67e2f78-cd24-43ac-a138-fa4ee12a70c4", 00:16:50.513 "strip_size_kb": 0, 00:16:50.513 "state": "online", 00:16:50.513 "raid_level": "raid1", 00:16:50.513 "superblock": true, 00:16:50.513 "num_base_bdevs": 2, 00:16:50.513 "num_base_bdevs_discovered": 1, 00:16:50.513 "num_base_bdevs_operational": 1, 00:16:50.513 "base_bdevs_list": [ 00:16:50.513 { 00:16:50.513 "name": null, 00:16:50.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.513 "is_configured": false, 00:16:50.513 "data_offset": 0, 00:16:50.513 "data_size": 7936 00:16:50.513 }, 00:16:50.513 { 00:16:50.513 "name": "BaseBdev2", 00:16:50.513 "uuid": "28d86d07-f393-59ce-8240-eca2fd8df2fa", 00:16:50.513 "is_configured": true, 00:16:50.513 "data_offset": 256, 00:16:50.513 "data_size": 7936 00:16:50.513 } 00:16:50.513 ] 00:16:50.513 }' 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99282 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99282 ']' 00:16:50.513 23:12:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99282 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99282 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99282' 00:16:50.513 killing process with pid 99282 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99282 00:16:50.513 Received shutdown signal, test time was about 60.000000 seconds 00:16:50.513 00:16:50.513 Latency(us) 00:16:50.513 [2024-11-18T23:12:09.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.513 [2024-11-18T23:12:09.891Z] =================================================================================================================== 00:16:50.513 [2024-11-18T23:12:09.891Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.513 [2024-11-18 23:12:09.789363] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.513 [2024-11-18 23:12:09.789472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.513 [2024-11-18 23:12:09.789516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.513 [2024-11-18 23:12:09.789524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:50.513 23:12:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99282 00:16:50.513 [2024-11-18 23:12:09.821664] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.773 ************************************ 00:16:50.773 END TEST raid_rebuild_test_sb_md_interleaved 00:16:50.773 ************************************ 00:16:50.773 23:12:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:50.773 00:16:50.773 real 0m16.109s 00:16:50.773 user 0m21.507s 00:16:50.773 sys 0m1.706s 00:16:50.773 23:12:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.773 23:12:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.773 23:12:10 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:50.773 23:12:10 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:50.773 23:12:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99282 ']' 00:16:50.773 23:12:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99282 00:16:50.773 23:12:10 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:51.033 00:16:51.033 real 9m55.378s 00:16:51.033 user 14m5.446s 00:16:51.033 sys 1m49.380s 00:16:51.033 ************************************ 00:16:51.033 END TEST bdev_raid 00:16:51.033 ************************************ 00:16:51.033 23:12:10 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.033 23:12:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.033 23:12:10 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:51.033 23:12:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:51.033 23:12:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.033 23:12:10 -- common/autotest_common.sh@10 -- # set +x 00:16:51.033 
************************************ 00:16:51.033 START TEST spdkcli_raid 00:16:51.033 ************************************ 00:16:51.033 23:12:10 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:51.033 * Looking for test storage... 00:16:51.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:51.033 23:12:10 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:51.033 23:12:10 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:51.033 23:12:10 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.293 23:12:10 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.293 --rc genhtml_branch_coverage=1 00:16:51.293 --rc genhtml_function_coverage=1 00:16:51.293 --rc genhtml_legend=1 00:16:51.293 --rc geninfo_all_blocks=1 00:16:51.293 --rc geninfo_unexecuted_blocks=1 00:16:51.293 00:16:51.293 ' 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.293 --rc genhtml_branch_coverage=1 00:16:51.293 --rc genhtml_function_coverage=1 00:16:51.293 --rc genhtml_legend=1 00:16:51.293 --rc geninfo_all_blocks=1 00:16:51.293 --rc geninfo_unexecuted_blocks=1 00:16:51.293 00:16:51.293 ' 00:16:51.293 
23:12:10 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.293 --rc genhtml_branch_coverage=1 00:16:51.293 --rc genhtml_function_coverage=1 00:16:51.293 --rc genhtml_legend=1 00:16:51.293 --rc geninfo_all_blocks=1 00:16:51.293 --rc geninfo_unexecuted_blocks=1 00:16:51.293 00:16:51.293 ' 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.293 --rc genhtml_branch_coverage=1 00:16:51.293 --rc genhtml_function_coverage=1 00:16:51.293 --rc genhtml_legend=1 00:16:51.293 --rc geninfo_all_blocks=1 00:16:51.293 --rc geninfo_unexecuted_blocks=1 00:16:51.293 00:16:51.293 ' 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:51.293 23:12:10 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99948 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:51.293 23:12:10 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99948 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99948 ']' 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.293 23:12:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.293 [2024-11-18 23:12:10.590666] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:51.293 [2024-11-18 23:12:10.591398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99948 ] 00:16:51.554 [2024-11-18 23:12:10.756811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:51.554 [2024-11-18 23:12:10.805407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.554 [2024-11-18 23:12:10.805506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.123 23:12:11 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.123 23:12:11 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:52.123 23:12:11 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:52.123 23:12:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.123 23:12:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.123 23:12:11 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:52.123 23:12:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.123 23:12:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.123 23:12:11 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:52.123 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:52.123 ' 00:16:54.031 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:54.031 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:54.031 23:12:13 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:54.031 23:12:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.031 23:12:13 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.031 23:12:13 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:54.031 23:12:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:54.031 23:12:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.032 23:12:13 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:54.032 ' 00:16:54.970 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:54.970 23:12:14 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:54.970 23:12:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.970 23:12:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.970 23:12:14 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:54.970 23:12:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:54.970 23:12:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.970 23:12:14 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:54.970 23:12:14 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:55.539 23:12:14 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:55.539 23:12:14 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:55.539 23:12:14 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:55.539 23:12:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.539 23:12:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.800 23:12:14 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:55.800 23:12:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.800 23:12:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.800 23:12:14 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:55.800 ' 00:16:56.738 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:56.738 23:12:15 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:56.738 23:12:15 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:56.738 23:12:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.738 23:12:16 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:56.738 23:12:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.738 23:12:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.738 23:12:16 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:56.738 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:56.738 ' 00:16:58.119 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:58.119 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:58.119 23:12:17 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:58.119 23:12:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.119 23:12:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.378 23:12:17 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99948 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99948 ']' 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99948 00:16:58.378 23:12:17 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99948 00:16:58.378 killing process with pid 99948 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99948' 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99948 00:16:58.378 23:12:17 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99948 00:16:58.638 Process with pid 99948 is not found 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99948 ']' 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99948 00:16:58.638 23:12:17 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99948 ']' 00:16:58.638 23:12:17 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99948 00:16:58.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99948) - No such process 00:16:58.638 23:12:17 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99948 is not found' 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:58.638 23:12:17 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:58.638 ************************************ 00:16:58.638 END TEST spdkcli_raid 
00:16:58.638 ************************************ 00:16:58.638 00:16:58.638 real 0m7.747s 00:16:58.638 user 0m16.289s 00:16:58.638 sys 0m1.133s 00:16:58.638 23:12:17 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.638 23:12:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.921 23:12:18 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:58.921 23:12:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:58.921 23:12:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:58.921 23:12:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.921 ************************************ 00:16:58.921 START TEST blockdev_raid5f 00:16:58.921 ************************************ 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:58.921 * Looking for test storage... 00:16:58.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.921 23:12:18 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:58.921 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.921 --rc genhtml_branch_coverage=1 00:16:58.921 --rc genhtml_function_coverage=1 00:16:58.921 --rc genhtml_legend=1 00:16:58.921 --rc geninfo_all_blocks=1 00:16:58.921 --rc geninfo_unexecuted_blocks=1 00:16:58.921 00:16:58.921 ' 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.921 --rc genhtml_branch_coverage=1 00:16:58.921 --rc genhtml_function_coverage=1 00:16:58.921 --rc genhtml_legend=1 00:16:58.921 --rc geninfo_all_blocks=1 00:16:58.921 --rc geninfo_unexecuted_blocks=1 00:16:58.921 00:16:58.921 ' 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.921 --rc genhtml_branch_coverage=1 00:16:58.921 --rc genhtml_function_coverage=1 00:16:58.921 --rc genhtml_legend=1 00:16:58.921 --rc geninfo_all_blocks=1 00:16:58.921 --rc geninfo_unexecuted_blocks=1 00:16:58.921 00:16:58.921 ' 00:16:58.921 23:12:18 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.921 --rc genhtml_branch_coverage=1 00:16:58.921 --rc genhtml_function_coverage=1 00:16:58.921 --rc genhtml_legend=1 00:16:58.921 --rc geninfo_all_blocks=1 00:16:58.921 --rc geninfo_unexecuted_blocks=1 00:16:58.921 00:16:58.921 ' 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:58.921 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100206 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
100206 00:16:59.232 23:12:18 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:59.233 23:12:18 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100206 ']' 00:16:59.233 23:12:18 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.233 23:12:18 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.233 23:12:18 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.233 23:12:18 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.233 23:12:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.233 [2024-11-18 23:12:18.395554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:59.233 [2024-11-18 23:12:18.395777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100206 ] 00:16:59.233 [2024-11-18 23:12:18.560544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.233 [2024-11-18 23:12:18.606174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.171 23:12:19 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:00.172 23:12:19 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.172 Malloc0 00:17:00.172 Malloc1 00:17:00.172 Malloc2 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "502ea55d-4d05-45d4-85c2-8c16985484d9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "502ea55d-4d05-45d4-85c2-8c16985484d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "502ea55d-4d05-45d4-85c2-8c16985484d9",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c0b0fb4c-1b57-45cd-9667-345c56ad0e1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"b5c3c167-1f7d-4b17-bc25-5cfa53af33ef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f2f44a33-4a32-471d-aa94-63c6ae896a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:00.172 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100206 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100206 ']' 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100206 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100206 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.172 killing process with pid 100206 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100206' 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100206 00:17:00.172 23:12:19 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100206 00:17:00.742 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:00.742 23:12:19 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:00.742 
23:12:19 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:00.742 23:12:19 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.742 23:12:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.742 ************************************ 00:17:00.742 START TEST bdev_hello_world 00:17:00.742 ************************************ 00:17:00.742 23:12:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:00.742 [2024-11-18 23:12:19.956150] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:00.742 [2024-11-18 23:12:19.956362] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100245 ] 00:17:01.002 [2024-11-18 23:12:20.119317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.002 [2024-11-18 23:12:20.168297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.002 [2024-11-18 23:12:20.361723] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:01.002 [2024-11-18 23:12:20.361772] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:01.002 [2024-11-18 23:12:20.361789] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:01.002 [2024-11-18 23:12:20.362170] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:01.002 [2024-11-18 23:12:20.362341] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:01.002 [2024-11-18 23:12:20.362361] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:01.002 [2024-11-18 23:12:20.362412] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:17:01.002 00:17:01.002 [2024-11-18 23:12:20.362431] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:01.262 00:17:01.262 real 0m0.739s 00:17:01.262 user 0m0.380s 00:17:01.262 sys 0m0.243s 00:17:01.262 23:12:20 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.262 ************************************ 00:17:01.262 END TEST bdev_hello_world 00:17:01.262 ************************************ 00:17:01.262 23:12:20 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:01.521 23:12:20 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:01.521 23:12:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.521 23:12:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.521 23:12:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:01.521 ************************************ 00:17:01.521 START TEST bdev_bounds 00:17:01.521 ************************************ 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100279 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:01.521 Process bdevio pid: 100279 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100279' 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100279 00:17:01.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100279 ']' 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.521 23:12:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:01.521 [2024-11-18 23:12:20.775655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:01.521 [2024-11-18 23:12:20.775859] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100279 ] 00:17:01.781 [2024-11-18 23:12:20.935531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.781 [2024-11-18 23:12:20.983757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.781 [2024-11-18 23:12:20.983874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.781 [2024-11-18 23:12:20.983973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.350 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.350 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:02.350 23:12:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:02.350 I/O targets: 00:17:02.350 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:17:02.350 00:17:02.350 00:17:02.350 CUnit - A unit testing framework for C - Version 2.1-3 00:17:02.350 http://cunit.sourceforge.net/ 00:17:02.350 00:17:02.350 00:17:02.350 Suite: bdevio tests on: raid5f 00:17:02.350 Test: blockdev write read block ...passed 00:17:02.350 Test: blockdev write zeroes read block ...passed 00:17:02.350 Test: blockdev write zeroes read no split ...passed 00:17:02.610 Test: blockdev write zeroes read split ...passed 00:17:02.610 Test: blockdev write zeroes read split partial ...passed 00:17:02.610 Test: blockdev reset ...passed 00:17:02.610 Test: blockdev write read 8 blocks ...passed 00:17:02.610 Test: blockdev write read size > 128k ...passed 00:17:02.610 Test: blockdev write read invalid size ...passed 00:17:02.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:02.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:02.610 Test: blockdev write read max offset ...passed 00:17:02.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:02.610 Test: blockdev writev readv 8 blocks ...passed 00:17:02.610 Test: blockdev writev readv 30 x 1block ...passed 00:17:02.610 Test: blockdev writev readv block ...passed 00:17:02.610 Test: blockdev writev readv size > 128k ...passed 00:17:02.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:02.610 Test: blockdev comparev and writev ...passed 00:17:02.610 Test: blockdev nvme passthru rw ...passed 00:17:02.610 Test: blockdev nvme passthru vendor specific ...passed 00:17:02.610 Test: blockdev nvme admin passthru ...passed 00:17:02.610 Test: blockdev copy ...passed 00:17:02.610 00:17:02.610 Run Summary: Type Total Ran Passed Failed Inactive 00:17:02.610 suites 1 1 n/a 0 0 00:17:02.610 tests 23 23 23 0 0 00:17:02.610 asserts 130 130 130 0 n/a 00:17:02.610 00:17:02.610 Elapsed time = 0.316 seconds 00:17:02.610 0 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 100279 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100279 ']' 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100279 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100279 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100279' 00:17:02.610 killing process with pid 100279 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100279 00:17:02.610 23:12:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100279 00:17:02.871 23:12:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:02.871 00:17:02.871 real 0m1.467s 00:17:02.871 user 0m3.495s 00:17:02.871 sys 0m0.352s 00:17:02.871 23:12:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.871 23:12:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:02.871 ************************************ 00:17:02.871 END TEST bdev_bounds 00:17:02.871 ************************************ 00:17:02.871 23:12:22 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:02.871 23:12:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:02.871 23:12:22 blockdev_raid5f -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.871 23:12:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:02.871 ************************************ 00:17:02.871 START TEST bdev_nbd 00:17:02.871 ************************************ 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:02.871 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 
00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100323 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100323 /var/tmp/spdk-nbd.sock 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100323 ']' 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:03.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.130 23:12:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:03.130 [2024-11-18 23:12:22.332544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:03.130 [2024-11-18 23:12:22.332757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.130 [2024-11-18 23:12:22.492423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.390 [2024-11-18 23:12:22.539055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:03.959 23:12:23 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.219 1+0 records in 00:17:04.219 1+0 records out 00:17:04.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039167 s, 10.5 MB/s 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:04.219 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:04.479 { 00:17:04.479 "nbd_device": "/dev/nbd0", 00:17:04.479 "bdev_name": "raid5f" 00:17:04.479 } 00:17:04.479 ]' 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:04.479 { 00:17:04.479 "nbd_device": "/dev/nbd0", 00:17:04.479 "bdev_name": "raid5f" 00:17:04.479 } 00:17:04.479 ]' 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.479 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.739 23:12:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:04.739 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:04.739 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:04.739 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:04.998 23:12:24 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:04.999 /dev/nbd0 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.999 23:12:24 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:04.999 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.259 1+0 records in 00:17:05.259 1+0 records out 00:17:05.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582628 s, 7.0 MB/s 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.259 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:05.518 { 00:17:05.518 "nbd_device": "/dev/nbd0", 00:17:05.518 "bdev_name": "raid5f" 00:17:05.518 } 00:17:05.518 ]' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:05.518 { 00:17:05.518 "nbd_device": "/dev/nbd0", 00:17:05.518 "bdev_name": "raid5f" 00:17:05.518 } 00:17:05.518 ]' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:05.518 256+0 records in 00:17:05.518 256+0 records out 00:17:05.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146562 s, 71.5 MB/s 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:05.518 256+0 records in 00:17:05.518 256+0 records out 00:17:05.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307581 s, 34.1 MB/s 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:05.518 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.519 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.778 23:12:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:06.045 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:06.305 malloc_lvol_verify 00:17:06.305 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:06.305 816bd96c-b35f-4bea-80a6-f1408e354e23 00:17:06.305 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:06.565 57a6b8b8-743b-4c94-90cf-b3aab7e453cd 00:17:06.565 23:12:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:06.826 /dev/nbd0 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:06.826 mke2fs 1.47.0 (5-Feb-2023) 00:17:06.826 Discarding device blocks: 0/4096 done 00:17:06.826 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:06.826 00:17:06.826 Allocating group tables: 0/1 done 00:17:06.826 Writing inode tables: 0/1 done 00:17:06.826 Creating journal (1024 blocks): done 00:17:06.826 Writing superblocks and filesystem accounting information: 0/1 done 00:17:06.826 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:06.826 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:07.086 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100323 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100323 ']' 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100323 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100323 00:17:07.087 killing process with pid 100323 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100323' 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100323 00:17:07.087 23:12:26 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100323 00:17:07.657 23:12:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:07.657 00:17:07.657 real 0m4.540s 00:17:07.657 user 0m6.516s 00:17:07.657 sys 0m1.260s 00:17:07.657 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.657 ************************************ 00:17:07.657 END TEST bdev_nbd 00:17:07.657 ************************************ 00:17:07.657 23:12:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:07.657 23:12:26 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:07.657 23:12:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:07.657 23:12:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:07.657 23:12:26 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:07.657 23:12:26 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:07.657 23:12:26 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.657 23:12:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:07.657 ************************************ 00:17:07.657 START TEST bdev_fio 00:17:07.657 ************************************ 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:07.657 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.657 23:12:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:07.657 ************************************ 00:17:07.657 START TEST bdev_fio_rw_verify 00:17:07.657 ************************************ 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:07.657 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:07.917 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:07.917 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:07.917 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:07.917 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:07.917 23:12:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:07.917 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:07.917 fio-3.35 00:17:07.917 Starting 1 thread 00:17:20.151 00:17:20.151 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100517: Mon Nov 18 23:12:37 2024 00:17:20.151 read: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(515MiB/10001msec) 00:17:20.151 slat (nsec): min=16648, max=52955, avg=17861.89, stdev=1300.67 00:17:20.151 clat (usec): min=9, max=267, avg=120.62, stdev=42.10 00:17:20.152 lat (usec): min=27, max=285, avg=138.49, stdev=42.20 00:17:20.152 clat percentiles (usec): 00:17:20.152 | 50.000th=[ 126], 99.000th=[ 192], 99.900th=[ 217], 99.990th=[ 249], 00:17:20.152 | 99.999th=[ 269] 00:17:20.152 write: IOPS=13.8k, BW=53.9MiB/s (56.6MB/s)(532MiB/9865msec); 0 zone resets 00:17:20.152 slat (usec): min=7, max=374, avg=15.56, stdev= 3.56 00:17:20.152 clat (usec): min=53, max=1740, avg=278.27, stdev=38.43 00:17:20.152 lat (usec): min=67, max=2114, avg=293.84, stdev=39.39 00:17:20.152 clat percentiles (usec): 00:17:20.152 | 50.000th=[ 281], 99.000th=[ 347], 99.900th=[ 570], 99.990th=[ 1012], 00:17:20.152 | 99.999th=[ 1647] 00:17:20.152 bw ( KiB/s): min=51184, max=57712, per=98.72%, avg=54529.79, stdev=1735.43, samples=19 00:17:20.152 iops : min=12796, max=14428, avg=13632.42, stdev=433.86, samples=19 00:17:20.152 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=18.17%, 250=42.87% 00:17:20.152 lat (usec) : 500=38.89%, 750=0.04%, 1000=0.02% 00:17:20.152 lat (msec) : 2=0.01% 00:17:20.152 cpu : usr=98.92%, sys=0.51%, ctx=20, majf=0, minf=13775 00:17:20.152 IO depths : 1=7.6%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.152 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.152 issued rwts: total=131768,136232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:20.152 00:17:20.152 Run status group 0 (all jobs): 00:17:20.152 READ: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=515MiB (540MB), run=10001-10001msec 00:17:20.152 WRITE: bw=53.9MiB/s (56.6MB/s), 53.9MiB/s-53.9MiB/s (56.6MB/s-56.6MB/s), io=532MiB (558MB), run=9865-9865msec 00:17:20.152 ----------------------------------------------------- 00:17:20.152 Suppressions used: 00:17:20.152 count bytes template 00:17:20.152 1 7 /usr/src/fio/parse.c 00:17:20.152 364 34944 /usr/src/fio/iolog.c 00:17:20.152 1 8 libtcmalloc_minimal.so 00:17:20.152 1 904 libcrypto.so 00:17:20.152 ----------------------------------------------------- 00:17:20.152 00:17:20.152 00:17:20.152 real 0m11.405s 00:17:20.152 user 0m11.489s 00:17:20.152 sys 0m0.712s 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:20.152 ************************************ 00:17:20.152 END TEST bdev_fio_rw_verify 00:17:20.152 ************************************ 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.152 23:12:38 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "502ea55d-4d05-45d4-85c2-8c16985484d9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "502ea55d-4d05-45d4-85c2-8c16985484d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "502ea55d-4d05-45d4-85c2-8c16985484d9",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c0b0fb4c-1b57-45cd-9667-345c56ad0e1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b5c3c167-1f7d-4b17-bc25-5cfa53af33ef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f2f44a33-4a32-471d-aa94-63c6ae896a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.152 /home/vagrant/spdk_repo/spdk 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:20.152 ************************************ 00:17:20.152 END 
TEST bdev_fio 00:17:20.152 ************************************ 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:20.152 00:17:20.152 real 0m11.702s 00:17:20.152 user 0m11.617s 00:17:20.152 sys 0m0.849s 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.152 23:12:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:20.152 23:12:38 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:20.152 23:12:38 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:20.152 23:12:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:20.152 23:12:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.152 23:12:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:20.152 ************************************ 00:17:20.152 START TEST bdev_verify 00:17:20.152 ************************************ 00:17:20.152 23:12:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:20.152 [2024-11-18 23:12:38.721082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:20.152 [2024-11-18 23:12:38.721205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100664 ] 00:17:20.152 [2024-11-18 23:12:38.888523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:20.152 [2024-11-18 23:12:38.971334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.152 [2024-11-18 23:12:38.971390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.152 Running I/O for 5 seconds... 00:17:22.029 11175.00 IOPS, 43.65 MiB/s [2024-11-18T23:12:42.345Z] 11290.00 IOPS, 44.10 MiB/s [2024-11-18T23:12:43.285Z] 11296.00 IOPS, 44.12 MiB/s [2024-11-18T23:12:44.680Z] 11297.00 IOPS, 44.13 MiB/s [2024-11-18T23:12:44.680Z] 11276.60 IOPS, 44.05 MiB/s 00:17:25.302 Latency(us) 00:17:25.302 [2024-11-18T23:12:44.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.302 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:25.302 Verification LBA range: start 0x0 length 0x2000 00:17:25.302 raid5f : 5.01 6786.27 26.51 0.00 0.00 28346.22 232.52 20719.68 00:17:25.302 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:25.302 Verification LBA range: start 0x2000 length 0x2000 00:17:25.302 raid5f : 5.02 4505.97 17.60 0.00 0.00 42525.05 232.52 30907.81 00:17:25.302 [2024-11-18T23:12:44.680Z] =================================================================================================================== 00:17:25.302 [2024-11-18T23:12:44.680Z] Total : 11292.23 44.11 0.00 0.00 34007.09 232.52 30907.81 00:17:25.302 00:17:25.302 real 0m6.041s 00:17:25.302 user 0m11.027s 00:17:25.302 sys 0m0.368s 00:17:25.302 ************************************ 00:17:25.302 END TEST bdev_verify 00:17:25.302 ************************************ 
00:17:25.302 23:12:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.302 23:12:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:25.579 23:12:44 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:25.579 23:12:44 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:25.579 23:12:44 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.579 23:12:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.579 ************************************ 00:17:25.579 START TEST bdev_verify_big_io 00:17:25.579 ************************************ 00:17:25.579 23:12:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:25.579 [2024-11-18 23:12:44.834393] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:25.579 [2024-11-18 23:12:44.834528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100751 ] 00:17:25.849 [2024-11-18 23:12:45.001153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:25.849 [2024-11-18 23:12:45.084708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.849 [2024-11-18 23:12:45.084749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.109 Running I/O for 5 seconds... 
00:17:27.995 633.00 IOPS, 39.56 MiB/s [2024-11-18T23:12:48.754Z] 761.00 IOPS, 47.56 MiB/s [2024-11-18T23:12:49.692Z] 782.00 IOPS, 48.88 MiB/s [2024-11-18T23:12:50.629Z] 792.75 IOPS, 49.55 MiB/s [2024-11-18T23:12:50.629Z] 799.40 IOPS, 49.96 MiB/s 00:17:31.251 Latency(us) 00:17:31.251 [2024-11-18T23:12:50.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.251 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:31.251 Verification LBA range: start 0x0 length 0x200 00:17:31.251 raid5f : 5.16 467.62 29.23 0.00 0.00 6836671.56 268.30 298546.53 00:17:31.251 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:31.251 Verification LBA range: start 0x200 length 0x200 00:17:31.251 raid5f : 5.28 360.52 22.53 0.00 0.00 8761774.33 220.90 373641.06 00:17:31.251 [2024-11-18T23:12:50.629Z] =================================================================================================================== 00:17:31.251 [2024-11-18T23:12:50.629Z] Total : 828.14 51.76 0.00 0.00 7685483.09 220.90 373641.06 00:17:31.821 00:17:31.821 real 0m6.287s 00:17:31.821 user 0m11.531s 00:17:31.821 sys 0m0.363s 00:17:31.821 23:12:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:31.821 23:12:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.821 ************************************ 00:17:31.821 END TEST bdev_verify_big_io 00:17:31.821 ************************************ 00:17:31.821 23:12:51 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:31.821 23:12:51 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:31.821 23:12:51 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.821 23:12:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:31.821 ************************************ 00:17:31.821 START TEST bdev_write_zeroes 00:17:31.821 ************************************ 00:17:31.821 23:12:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:31.821 [2024-11-18 23:12:51.192879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:31.821 [2024-11-18 23:12:51.193065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100837 ] 00:17:32.081 [2024-11-18 23:12:51.361385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.081 [2024-11-18 23:12:51.441854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.342 Running I/O for 1 seconds... 
00:17:33.726 30111.00 IOPS, 117.62 MiB/s 00:17:33.726 Latency(us) 00:17:33.726 [2024-11-18T23:12:53.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.726 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:33.726 raid5f : 1.01 30071.20 117.47 0.00 0.00 4244.16 1459.54 8928.92 00:17:33.726 [2024-11-18T23:12:53.104Z] =================================================================================================================== 00:17:33.726 [2024-11-18T23:12:53.104Z] Total : 30071.20 117.47 0.00 0.00 4244.16 1459.54 8928.92 00:17:33.987 00:17:33.987 real 0m2.021s 00:17:33.987 user 0m1.547s 00:17:33.987 sys 0m0.344s 00:17:33.987 23:12:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.987 23:12:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 ************************************ 00:17:33.987 END TEST bdev_write_zeroes 00:17:33.987 ************************************ 00:17:33.987 23:12:53 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:33.987 23:12:53 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:33.987 23:12:53 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.987 23:12:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 ************************************ 00:17:33.987 START TEST bdev_json_nonenclosed 00:17:33.987 ************************************ 00:17:33.987 23:12:53 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:33.987 [2024-11-18 
23:12:53.282942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:33.987 [2024-11-18 23:12:53.283069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100875 ] 00:17:34.248 [2024-11-18 23:12:53.445459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.248 [2024-11-18 23:12:53.515326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.248 [2024-11-18 23:12:53.515531] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:34.248 [2024-11-18 23:12:53.515569] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:34.248 [2024-11-18 23:12:53.515590] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:34.509 00:17:34.509 real 0m0.474s 00:17:34.509 user 0m0.230s 00:17:34.509 sys 0m0.139s 00:17:34.509 ************************************ 00:17:34.509 END TEST bdev_json_nonenclosed 00:17:34.509 ************************************ 00:17:34.509 23:12:53 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.509 23:12:53 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:34.509 23:12:53 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:34.509 23:12:53 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:34.509 23:12:53 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.509 23:12:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:34.509 
************************************ 00:17:34.509 START TEST bdev_json_nonarray 00:17:34.509 ************************************ 00:17:34.509 23:12:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:34.509 [2024-11-18 23:12:53.823552] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:34.509 [2024-11-18 23:12:53.823674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100906 ] 00:17:34.773 [2024-11-18 23:12:53.983807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.773 [2024-11-18 23:12:54.067751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.773 [2024-11-18 23:12:54.067873] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:34.773 [2024-11-18 23:12:54.067898] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:34.773 [2024-11-18 23:12:54.067912] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:35.033 00:17:35.033 real 0m0.482s 00:17:35.033 user 0m0.224s 00:17:35.033 sys 0m0.153s 00:17:35.033 23:12:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.033 23:12:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:35.033 ************************************ 00:17:35.033 END TEST bdev_json_nonarray 00:17:35.033 ************************************ 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:35.033 23:12:54 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:35.033 ************************************ 00:17:35.033 END TEST blockdev_raid5f 00:17:35.033 ************************************ 00:17:35.033 00:17:35.033 real 0m36.250s 00:17:35.033 user 0m48.415s 00:17:35.033 sys 0m5.192s 00:17:35.033 23:12:54 blockdev_raid5f -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.033 23:12:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:35.033 23:12:54 -- spdk/autotest.sh@194 -- # uname -s 00:17:35.033 23:12:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:35.033 23:12:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:35.033 23:12:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:35.033 23:12:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:35.033 23:12:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:35.033 23:12:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:35.033 23:12:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.033 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:17:35.293 23:12:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:35.293 23:12:54 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:35.293 23:12:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:35.293 23:12:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:35.293 23:12:54 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:35.293 23:12:54 -- spdk/autotest.sh@381 -- # trap - 
SIGINT SIGTERM EXIT 00:17:35.293 23:12:54 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:35.293 23:12:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.293 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:17:35.293 23:12:54 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:35.293 23:12:54 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:35.293 23:12:54 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:35.293 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.833 INFO: APP EXITING 00:17:37.833 INFO: killing all VMs 00:17:37.833 INFO: killing vhost app 00:17:37.833 INFO: EXIT DONE 00:17:37.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:37.833 Waiting for block devices as requested 00:17:38.093 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:38.093 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:39.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.033 Cleaning 00:17:39.033 Removing: /var/run/dpdk/spdk0/config 00:17:39.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:39.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:39.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:39.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:39.033 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:39.033 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:39.033 Removing: /dev/shm/spdk_tgt_trace.pid69100 00:17:39.033 Removing: /var/run/dpdk/spdk0 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100206 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100245 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100279 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100502 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100664 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100751 00:17:39.033 Removing: 
/var/run/dpdk/spdk_pid100837 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100875 00:17:39.033 Removing: /var/run/dpdk/spdk_pid100906 00:17:39.033 Removing: /var/run/dpdk/spdk_pid68931 00:17:39.033 Removing: /var/run/dpdk/spdk_pid69100 00:17:39.033 Removing: /var/run/dpdk/spdk_pid69302 00:17:39.033 Removing: /var/run/dpdk/spdk_pid69389 00:17:39.033 Removing: /var/run/dpdk/spdk_pid69418 00:17:39.033 Removing: /var/run/dpdk/spdk_pid69528 00:17:39.294 Removing: /var/run/dpdk/spdk_pid69542 00:17:39.294 Removing: /var/run/dpdk/spdk_pid69730 00:17:39.294 Removing: /var/run/dpdk/spdk_pid69809 00:17:39.294 Removing: /var/run/dpdk/spdk_pid69887 00:17:39.294 Removing: /var/run/dpdk/spdk_pid69983 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70069 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70103 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70145 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70210 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70327 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70752 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70805 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70852 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70868 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70937 00:17:39.294 Removing: /var/run/dpdk/spdk_pid70947 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71011 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71029 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71076 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71089 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71137 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71149 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71287 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71318 00:17:39.294 Removing: /var/run/dpdk/spdk_pid71407 00:17:39.294 Removing: /var/run/dpdk/spdk_pid72578 00:17:39.294 Removing: /var/run/dpdk/spdk_pid72773 00:17:39.294 Removing: /var/run/dpdk/spdk_pid72902 00:17:39.294 Removing: /var/run/dpdk/spdk_pid73512 00:17:39.294 Removing: /var/run/dpdk/spdk_pid73707 00:17:39.294 Removing: 
/var/run/dpdk/spdk_pid73836 00:17:39.294 Removing: /var/run/dpdk/spdk_pid74441 00:17:39.294 Removing: /var/run/dpdk/spdk_pid74760 00:17:39.294 Removing: /var/run/dpdk/spdk_pid74889 00:17:39.294 Removing: /var/run/dpdk/spdk_pid76224 00:17:39.294 Removing: /var/run/dpdk/spdk_pid76462 00:17:39.294 Removing: /var/run/dpdk/spdk_pid76591 00:17:39.294 Removing: /var/run/dpdk/spdk_pid77926 00:17:39.294 Removing: /var/run/dpdk/spdk_pid78163 00:17:39.294 Removing: /var/run/dpdk/spdk_pid78292 00:17:39.294 Removing: /var/run/dpdk/spdk_pid79622 00:17:39.294 Removing: /var/run/dpdk/spdk_pid80057 00:17:39.294 Removing: /var/run/dpdk/spdk_pid80191 00:17:39.294 Removing: /var/run/dpdk/spdk_pid81611 00:17:39.294 Removing: /var/run/dpdk/spdk_pid81858 00:17:39.294 Removing: /var/run/dpdk/spdk_pid81993 00:17:39.294 Removing: /var/run/dpdk/spdk_pid83424 00:17:39.294 Removing: /var/run/dpdk/spdk_pid83672 00:17:39.294 Removing: /var/run/dpdk/spdk_pid83801 00:17:39.294 Removing: /var/run/dpdk/spdk_pid85233 00:17:39.294 Removing: /var/run/dpdk/spdk_pid85704 00:17:39.294 Removing: /var/run/dpdk/spdk_pid85838 00:17:39.294 Removing: /var/run/dpdk/spdk_pid85965 00:17:39.294 Removing: /var/run/dpdk/spdk_pid86360 00:17:39.294 Removing: /var/run/dpdk/spdk_pid87072 00:17:39.294 Removing: /var/run/dpdk/spdk_pid87459 00:17:39.553 Removing: /var/run/dpdk/spdk_pid88135 00:17:39.553 Removing: /var/run/dpdk/spdk_pid88559 00:17:39.553 Removing: /var/run/dpdk/spdk_pid89297 00:17:39.553 Removing: /var/run/dpdk/spdk_pid89696 00:17:39.553 Removing: /var/run/dpdk/spdk_pid91618 00:17:39.553 Removing: /var/run/dpdk/spdk_pid92051 00:17:39.553 Removing: /var/run/dpdk/spdk_pid92470 00:17:39.553 Removing: /var/run/dpdk/spdk_pid94509 00:17:39.553 Removing: /var/run/dpdk/spdk_pid94978 00:17:39.553 Removing: /var/run/dpdk/spdk_pid95470 00:17:39.553 Removing: /var/run/dpdk/spdk_pid96503 00:17:39.553 Removing: /var/run/dpdk/spdk_pid96816 00:17:39.553 Removing: /var/run/dpdk/spdk_pid97736 00:17:39.553 Removing: 
/var/run/dpdk/spdk_pid98048 00:17:39.553 Removing: /var/run/dpdk/spdk_pid98965 00:17:39.553 Removing: /var/run/dpdk/spdk_pid99282 00:17:39.553 Removing: /var/run/dpdk/spdk_pid99948 00:17:39.553 Clean 00:17:39.553 23:12:58 -- common/autotest_common.sh@1451 -- # return 0 00:17:39.553 23:12:58 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:39.553 23:12:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.553 23:12:58 -- common/autotest_common.sh@10 -- # set +x 00:17:39.553 23:12:58 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:39.553 23:12:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.553 23:12:58 -- common/autotest_common.sh@10 -- # set +x 00:17:39.812 23:12:58 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:39.812 23:12:58 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:39.812 23:12:58 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:39.813 23:12:58 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:39.813 23:12:58 -- spdk/autotest.sh@394 -- # hostname 00:17:39.813 23:12:58 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:39.813 geninfo: WARNING: invalid characters removed from testname! 
00:18:01.802 23:13:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:05.112 23:13:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:06.492 23:13:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:09.037 23:13:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:10.963 23:13:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:13.502 23:13:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:14.883 23:13:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:15.144 23:13:34 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:18:15.144 23:13:34 -- common/autotest_common.sh@1681 -- $ lcov --version
00:18:15.144 23:13:34 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:18:15.144 23:13:34 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:18:15.144 23:13:34 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:18:15.144 23:13:34 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:18:15.144 23:13:34 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:18:15.144 23:13:34 -- scripts/common.sh@336 -- $ IFS=.-:
00:18:15.144 23:13:34 -- scripts/common.sh@336 -- $ read -ra ver1
00:18:15.144 23:13:34 -- scripts/common.sh@337 -- $ IFS=.-:
00:18:15.144 23:13:34 -- scripts/common.sh@337 -- $ read -ra ver2
00:18:15.144 23:13:34 -- scripts/common.sh@338 -- $ local 'op=<'
00:18:15.144 23:13:34 -- scripts/common.sh@340 -- $ ver1_l=2
00:18:15.144 23:13:34 -- scripts/common.sh@341 -- $ ver2_l=1
00:18:15.144 23:13:34 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:18:15.144 23:13:34 -- scripts/common.sh@344 -- $ case "$op" in
00:18:15.144 23:13:34 -- scripts/common.sh@345 -- $ : 1
00:18:15.144 23:13:34 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:18:15.144 23:13:34 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:15.144 23:13:34 -- scripts/common.sh@365 -- $ decimal 1
00:18:15.144 23:13:34 -- scripts/common.sh@353 -- $ local d=1
00:18:15.144 23:13:34 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:15.144 23:13:34 -- scripts/common.sh@355 -- $ echo 1
00:18:15.144 23:13:34 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:15.144 23:13:34 -- scripts/common.sh@366 -- $ decimal 2
00:18:15.144 23:13:34 -- scripts/common.sh@353 -- $ local d=2
00:18:15.144 23:13:34 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:15.144 23:13:34 -- scripts/common.sh@355 -- $ echo 2
00:18:15.144 23:13:34 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:15.144 23:13:34 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:15.144 23:13:34 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:15.144 23:13:34 -- scripts/common.sh@368 -- $ return 0
00:18:15.144 23:13:34 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:15.144 23:13:34 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:15.144 --rc genhtml_branch_coverage=1
00:18:15.144 --rc genhtml_function_coverage=1
00:18:15.144 --rc genhtml_legend=1
00:18:15.144 --rc geninfo_all_blocks=1
00:18:15.144 --rc geninfo_unexecuted_blocks=1
00:18:15.144 
00:18:15.144 '
00:18:15.144 23:13:34 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:15.144 --rc genhtml_branch_coverage=1
00:18:15.144 --rc genhtml_function_coverage=1
00:18:15.144 --rc genhtml_legend=1
00:18:15.144 --rc geninfo_all_blocks=1
00:18:15.144 --rc geninfo_unexecuted_blocks=1
00:18:15.144 
00:18:15.144 '
00:18:15.144 23:13:34 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:15.144 --rc genhtml_branch_coverage=1
00:18:15.144 --rc genhtml_function_coverage=1
00:18:15.144 --rc genhtml_legend=1
00:18:15.144 --rc geninfo_all_blocks=1
00:18:15.144 --rc geninfo_unexecuted_blocks=1
00:18:15.144 
00:18:15.144 '
00:18:15.144 23:13:34 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:15.144 --rc genhtml_branch_coverage=1
00:18:15.144 --rc genhtml_function_coverage=1
00:18:15.144 --rc genhtml_legend=1
00:18:15.144 --rc geninfo_all_blocks=1
00:18:15.144 --rc geninfo_unexecuted_blocks=1
00:18:15.144 
00:18:15.144 '
00:18:15.144 23:13:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:15.144 23:13:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:15.144 23:13:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:15.144 23:13:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:15.144 23:13:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:15.144 23:13:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:15.144 23:13:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:15.145 23:13:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:15.145 23:13:34 -- paths/export.sh@5 -- $ export PATH
00:18:15.145 23:13:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:15.145 23:13:34 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:15.145 23:13:34 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:15.145 23:13:34 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731971614.XXXXXX
00:18:15.145 23:13:34 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731971614.6YMKUL
00:18:15.145 23:13:34 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:15.145 23:13:34 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:15.145 23:13:34 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:15.145 23:13:34 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:15.145 23:13:34 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:15.145 23:13:34 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:15.145 23:13:34 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:15.145 23:13:34 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:15.145 23:13:34 -- common/autotest_common.sh@10 -- $ set +x
00:18:15.145 23:13:34 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:15.145 23:13:34 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:15.145 23:13:34 -- pm/common@17 -- $ local monitor
00:18:15.145 23:13:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:15.145 23:13:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:15.145 23:13:34 -- pm/common@25 -- $ sleep 1
00:18:15.145 23:13:34 -- pm/common@21 -- $ date +%s
00:18:15.145 23:13:34 -- pm/common@21 -- $ date +%s
00:18:15.145 23:13:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731971614
00:18:15.145 23:13:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731971614
00:18:15.406 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731971614_collect-cpu-load.pm.log
00:18:15.406 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731971614_collect-vmstat.pm.log
00:18:16.347 23:13:35 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:16.347 23:13:35 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:16.347 23:13:35 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:16.347 23:13:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:16.347 23:13:35 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:16.347 23:13:35 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:16.347 23:13:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:16.347 23:13:35 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:16.347 23:13:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:16.347 23:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:16.347 23:13:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:16.347 23:13:35 -- pm/common@44 -- $ pid=102423
00:18:16.347 23:13:35 -- pm/common@50 -- $ kill -TERM 102423
00:18:16.347 23:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:16.347 23:13:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:16.347 23:13:35 -- pm/common@44 -- $ pid=102425
00:18:16.347 23:13:35 -- pm/common@50 -- $ kill -TERM 102425
00:18:16.347 + [[ -n 6156 ]]
00:18:16.347 + sudo kill 6156
00:18:16.356 [Pipeline] }
00:18:16.371 [Pipeline] // timeout
00:18:16.376 [Pipeline] }
00:18:16.389 [Pipeline] // stage
00:18:16.393 [Pipeline] }
00:18:16.406 [Pipeline] // catchError
00:18:16.414 [Pipeline] stage
00:18:16.416 [Pipeline] { (Stop VM)
00:18:16.426 [Pipeline] sh
00:18:16.708 + vagrant halt
00:18:19.245 ==> default: Halting domain...
00:18:27.392 [Pipeline] sh
00:18:27.702 + vagrant destroy -f
00:18:30.250 ==> default: Removing domain...
00:18:30.263 [Pipeline] sh
00:18:30.548 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:30.558 [Pipeline] }
00:18:30.573 [Pipeline] // stage
00:18:30.579 [Pipeline] }
00:18:30.593 [Pipeline] // dir
00:18:30.598 [Pipeline] }
00:18:30.612 [Pipeline] // wrap
00:18:30.619 [Pipeline] }
00:18:30.632 [Pipeline] // catchError
00:18:30.642 [Pipeline] stage
00:18:30.644 [Pipeline] { (Epilogue)
00:18:30.657 [Pipeline] sh
00:18:30.942 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:35.158 [Pipeline] catchError
00:18:35.160 [Pipeline] {
00:18:35.176 [Pipeline] sh
00:18:35.462 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:35.462 Artifacts sizes are good
00:18:35.472 [Pipeline] }
00:18:35.486 [Pipeline] // catchError
00:18:35.498 [Pipeline] archiveArtifacts
00:18:35.505 Archiving artifacts
00:18:35.606 [Pipeline] cleanWs
00:18:35.621 [WS-CLEANUP] Deleting project workspace...
00:18:35.621 [WS-CLEANUP] Deferred wipeout is used...
00:18:35.628 [WS-CLEANUP] done
00:18:35.630 [Pipeline] }
00:18:35.646 [Pipeline] // stage
00:18:35.652 [Pipeline] }
00:18:35.667 [Pipeline] // node
00:18:35.673 [Pipeline] End of Pipeline
00:18:35.729 Finished: SUCCESS